
Methods And Systems For Enabling Autonomous Driving

Abstract: Embodiments herein disclose methods and systems for enabling autonomous driving for a vehicle, wherein the feed from a plurality of cameras is compressed and optimized, and the feed and inputs from one or more sensors are processed using one or more on-board processors/circuits. The plurality of processors can use machine vision techniques to identify objects in the feed and issue commands for path planning and motion planning of the vehicle based on the processed data. Embodiments of the system disclosed herein utilize pseudo-LiDAR technology and are interoperable such that the system can be integrated with vehicles having various form factors. Embodiments of the system also allow for remote control of the vehicle. FIG. 1


Patent Information

Application #
Filing Date
21 January 2021
Publication Number
29/2022
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
paralegal@arcticinvent.com
Parent Application
Patent Number
Legal Status
Grant Date
2024-03-07
Renewal Date

Applicants

Flo Mobility Private limited
F-705, Springfields apartment, Ambalipura, Sarjapura road, Bangalore, Karnataka - 560102.

Inventors

1. Angad Singi
201-Vyankatesh 1, Salasar Vihar Colony, Opp. Bharat Petrol Pump, Wathoda Ring Road, Nagpur, Maharashtra, India - 440035
2. Manesh Jain
F-705, Springfields Apartment, Ambalipura, Sarjapura road, Bangalore, Karnataka - 560102.

Specification

The following specification describes the invention:-
CROSS REFERENCE TO RELATED APPLICATION
This application is based on and derives the benefit of Indian Provisional Application 202141002993 filed on 21/01/2021, the contents of which are incorporated herein by reference.

TECHNICAL FIELD
[001] Embodiments disclosed herein relate to autonomous driving and more particularly to providing methods and systems for enabling autonomous driving.

BACKGROUND
[002] Currently available autonomous vehicle solutions typically have high processing requirements, which may not be feasible in all vehicles.
[003] Other solutions use backend resources (such as the Cloud and backend servers) for processing the data required for enabling autonomous driving. However, this imposes heavy bandwidth requirements for exchanging data with the backend, which may not be met due to reasons such as lack of network resources, network congestion, and so on.
[004] Further, currently available autonomous driving solutions are typically tied to the form factor of the vehicle. For example, an autonomous driving solution designed for a four wheeled vehicle (such as, a car, a truck, a bus, and so on) cannot be applied to a two wheeled vehicle (such as a scooter, motorbike, and so on).
[005] Various autonomous vehicle solutions use LiDAR technology as sensors for measuring the distance between the vehicle and at least one object; however, the implementation of LiDAR technology is very expensive.

OBJECTS
[006] The principal object of embodiments herein is to disclose methods and systems for enabling autonomous driving for a vehicle, wherein the feed from a plurality of cameras and the feed and inputs from one or more sensors is processed using one or more on-board processors/circuits.
[007] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating at least one embodiment and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.

BRIEF DESCRIPTION OF FIGURES
[008] Embodiments herein are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[009] FIG. 1 depicts an overview of a system for enabling autonomous driving in a vehicle, according to embodiments as disclosed herein;
[0010] FIG. 2 depicts a cloud stack and an edge stack of the system, according to embodiments as disclosed herein;
[0011] FIG. 3 depicts the functioning of a vision module, according to embodiments as disclosed herein;
[0012] FIG. 4 depicts the information flow and medium of communication between the various modules of the system, according to embodiments as disclosed herein;
[0013] FIG. 5 is an example diagram of a mission planner process, according to embodiments as disclosed herein;
[0015] FIG. 6 is an example diagram of how local maps in various vehicles are stitched and converted into a central map available to be consumed by other vehicles in the network, according to embodiments as disclosed herein;
[0015] FIG. 7 is an example diagram of a dynamic kinematics control module of the system, according to embodiments as disclosed herein;
[0016] FIG. 8 is an example diagram illustrating how the teleoperations (tele-ops/teleops) and autonomous systems interact with the vehicle, according to embodiments as disclosed herein;
[0017] FIG. 9 is an example diagram depicting the various states of the vehicle when integrated with the system, according to embodiments as disclosed herein;
[0018] FIG. 10 is an example block diagram of the system when integrated with the vehicle, according to embodiments as disclosed herein; and
[0019] FIG. 11 is an example block diagram illustrating the power transfer within the system, according to embodiments as disclosed herein.

DETAILED DESCRIPTION
[0020] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0021] The embodiments herein achieve methods and systems for enabling autonomous driving for a vehicle. Referring now to the drawings, and more particularly to FIGS. 1 through 11, where similar reference characters denote corresponding features consistently throughout the figures, there are shown embodiments.
[0022] Embodiments herein disclose methods and systems for enabling autonomous driving for a vehicle, wherein the feed from a plurality of cameras and the feed and inputs from one or more sensors is processed using one or more on-board processors/circuits, and the on-board processor issues commands that result in the actuation of the vehicle. The autonomous systems disclosed in the embodiments herein can be vision-based, can include a plurality of processors, and can allow for remote control (manual) of an autonomous vehicle. An advantage with the system is that it can be interoperable, wherein the functionality of the system is not limited by the form factor of the vehicle that the system is integrated with.
[0023] FIG. 1 depicts an overview of a system 100 for enabling autonomous driving in a vehicle, according to embodiments as disclosed herein. The vehicle as disclosed herein can be any vehicle, such as, but not limited to, four wheeled vehicles, three wheeled vehicles, two wheeled vehicles, delivery bots, rescue bots, trikes, and so on. The system 100, as depicted, comprises a plurality of sensors (such as, but not limited to, Cameras 101, Sonar, Inertial Measurement Unit (IMU), Global Positioning System (GPS), Infrared (IR), Radio Detection and Ranging (RADAR), ambient sounds, odometer, Electronic Control Unit (ECU), control units, and so on) that can be aggregated (also referred to as sensor aggregation) and a processing module 103. The processing module 103 can receive the data/feed from the plurality of sensors. The feed from the plurality of cameras 101 can be compressed to reduce the data transfer load. The feed can be compressed using one or more compression protocols, such as, but not limited to, a real-time communication compression protocol or a lossless video streaming protocol. In an embodiment herein, the cameras 101 can be monocular cameras or stereo cameras. The surround cameras 101 can provide the feed in red-green-blue (RGB), and depth can be extracted from the feed through a monocular depth detection model. It is to be noted that the processing module 103 can comprise one or more processors.
[0024] The feed from the plurality of cameras 101 can be optimized to ensure that the feed is relayed even in patchy network conditions. The feed optimization process can involve modifying one or more properties of the frame such as, but not limited to, frame rate, feed resolution and frame skip logic, and so on, to optimize the feed.
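By way of illustration only, the following is a minimal Python sketch of such a feed optimizer, in which frame rate, resolution, and frame-skip logic are chosen from a measured link bandwidth; the thresholds and settings are illustrative assumptions and not values specified by the embodiments herein.

```python
# Minimal sketch of adaptive feed optimization (assumed thresholds; not the
# exact parameters used by the system).
from dataclasses import dataclass

@dataclass
class FeedSettings:
    frame_rate: int       # frames per second relayed to the remote location
    resolution: tuple     # (width, height) of each relayed frame
    frame_skip: int       # relay every Nth frame from the camera queue

def optimize_feed(measured_bandwidth_kbps: float) -> FeedSettings:
    """Pick frame rate, resolution and skip logic for the current link quality."""
    if measured_bandwidth_kbps > 4000:        # healthy link
        return FeedSettings(30, (1280, 720), 1)
    if measured_bandwidth_kbps > 1000:        # degraded link
        return FeedSettings(15, (640, 360), 2)
    return FeedSettings(5, (320, 180), 6)     # patchy network: keep a low-rate feed alive

if __name__ == "__main__":
    print(optimize_feed(750))   # FeedSettings(frame_rate=5, resolution=(320, 180), frame_skip=6)
```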
[0025] The compressed and optimized feed can be relayed to a remote location using a communication interface. The communication interface can use one or more wireless communication interfaces for communicating with external entities, such as, but not limited to, a server, and the Cloud. The processing module 103 can detect one or more issues with the network by looking at the ping. Examples of the issues can include, but are not limited to, network interruptions, bandwidth changes, better network conditions on another network not currently in use, and so on.
[0026] The processing module 103 can also perform the processes of localization and mapping, and mission planning (path planning and motion planning) and control. The process of localization can involve determining the present location of the vehicle. This can be facilitated by the data from the GPS sensor or any other geo-location sensor. The process of mapping can involve determining the destination for the vehicle based on a map that can be generated of the vehicle environment, and as such can be dependent on the location of the vehicle. One part of the mission planning process can be the path planning, which can involve determining the route to be taken by the vehicle to arrive at its destination. Another part of the mission planning process can be the motion planning, which can involve determining the motion of the vehicle along certain areas of the route. The mission planning process can include a variety of tasks that can be performed en route, and can also depend on a set of rules that can be stored in a rules database, and the rules database can be stored in the cloud. A user can input details regarding the mission to be undertaken and the various rules, and these details can be dependent on the location of the vehicle and the mapping of the vehicle. Based on the location of the vehicle, mapping of the vehicle, and the destination of the vehicle, the control process can be executed where a command is issued that can be directed to an actuation layer 105 (also referred to as actuator) for a specific movement of the vehicle. The actuation layer can comprise a motor and a motor driver.
[0027] A primary processor in the processing module 103 can receive data from the plurality of sensors and generate a grid map using suitable tools such as, but not limited to, ‘slam_toolbox’. The primary processor can also obtain precise location data through suitable techniques such as, but not limited to, information fusion of the cameras, IMU, GPS, and odometer, using extended or many layers of filtering algorithms. Examples of techniques other than information fusion can be statistical, machine learning, end-to-end artificial neural network (ANN), and other artificial intelligence (AI)-based approaches such as, but not limited to, maximum weighted voting with a meta/master program. In an embodiment herein, Rao-Blackwellised particle filters (RBPF) or Kalman filters can be used for linear systems. In another embodiment herein, high dimensional non-linear recursive estimators can be used.
[0028] A secondary processor in the processing module 103 can enable the processing module 103 to perform dynamic path planning, dynamic mission planning, global and local path planning, and subsequently, generate one or more safe trajectories to execute the path traversal. The processing module 103 can use multi-core parallel programming with GPU acceleration to achieve the dynamic path planning using suitable techniques (such as, but not limited to, Dijkstra’s algorithm), which can ascertain the path to be traversed by the vehicle. The processing module 103 can perform Visual Inertial SLAM (Simultaneous Localization and Mapping) by sensor fusing the IMU and Visual Odometry for the localization process. The processing module 103 can perform dynamic path planning by superimposing sensor fusion data from global reference and local reference sensors. The processing module 103 can obtain the global map frame from a geo-location sensor. The geo-location sensor can use one or more suitable means for determining the location of the vehicle, such as, Global Positioning System (GPS), Galileo, GLONASS, NAVIC, Gagan, Beidou, triangulation techniques, and so on. The processing module 103 can obtain the local map frame by fusing IMU and odometry data, using techniques such as, but not limited to, Kalman filtering and a host of probabilistic, particle, and/or deterministic filtering methods. The processing module 103 can apply a matrix transformation on the two data streams to compute the final position of the vehicle with reference to the map.
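By way of illustration only, the following is a minimal Python sketch of grid-based path planning with Dijkstra's algorithm, one of the suitable techniques named above; the grid, unit costs, and 4-connectivity are illustrative assumptions and not the cost map used by the processing module 103.

```python
# Minimal sketch of path planning on an occupancy grid with Dijkstra's algorithm.
import heapq

def dijkstra(grid, start, goal):
    """grid: 2D list where 0 = free cell, 1 = obstacle. Returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1                       # uniform cost per grid step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back from the goal.
    path, cell = [], goal
    while cell in prev or cell == start:
        path.append(cell)
        if cell == start:
            break
        cell = prev[cell]
    return list(reversed(path))

if __name__ == "__main__":
    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(dijkstra(grid, (0, 0), (2, 0)))
    # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```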
[0029] The processing module 103 can comprise one or more emergency triggers for actuators, if a primary decision is delayed or inaccurate. The processing module 103 can comprise logic built upon multiple scenarios in which the vehicle executes an emergency protocol. For example, when a timeout occurs because no command data is received by the vehicle within a predefined time period (for example, 0.3 seconds), the vehicle brakes are applied automatically. As another example, on the processing module 103 detecting an object within a predefined distance (for example, 100 centimeters) using information provided by one or more object detection sensors, the processing module 103 can automatically instruct control units present in the vehicle to apply the brakes, and the processing module 103 replans the path to continue with the movement of the vehicle, while avoiding the obstacle.
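By way of illustration only, the following is a minimal Python sketch of the two emergency triggers exemplified above (command timeout and minimum obstacle distance); the 0.3 second and 100 centimeter values are taken from the examples in this paragraph, while the brake and replan callbacks are hypothetical placeholders.

```python
# Minimal sketch of an emergency-trigger check (timeout and obstacle distance).
import time

COMMAND_TIMEOUT_S = 0.3           # example value from the description
MIN_OBSTACLE_DISTANCE_CM = 100    # example value from the description

def check_emergency(last_command_time_s, obstacle_distance_cm, apply_brakes, replan_path):
    """Trigger the fallback actions when the primary decision is delayed or an obstacle is close."""
    if time.monotonic() - last_command_time_s > COMMAND_TIMEOUT_S:
        apply_brakes(reason="command timeout")
        return True
    if obstacle_distance_cm is not None and obstacle_distance_cm < MIN_OBSTACLE_DISTANCE_CM:
        apply_brakes(reason="obstacle within safety distance")
        replan_path()
        return True
    return False
```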
[0030] The primary processing unit in the processing module 103 can detect signals (such as acceleration, braking, steering, light on/off, side indicators, and so on) using data received from the sensors and/or the cameras. The processing module 103 can relay these detected signals to corresponding hardware for execution.
[0031] On one or more predefined events occurring, the processing module 103 can save a log in a suitable location (such as an internal memory, an external memory, the Cloud, a server, a file server, a data server, and so on). The saved log can comprise information related to the event, such as, but not limited to, details of the event, corresponding timestamps, concurrently occurring events, and so on. For example, the motion of the vehicle along a path having potholes can be different from the motion of the vehicle along a clear path. As a result, logs can be generated that can provide information regarding the irregularity in the motion of the vehicle along the coordinates having the potholes, which can then alert a teleoperator about the presence of potholes the next time the vehicle is travelling through those same coordinates.
[0032] When an exception occurs, the processing module 103 can save a log with that exception, details associated with the exception, and corresponding timestamps in a suitable location (such as an internal memory, an external memory, the Cloud, a server, a file server, a data server, and so on). The processing module 103 can perform event logging and exception handling. In an example herein, the processing module 103 can generate logs using the “logging” library and store them in a suitable location (such as an external memory, the Cloud, a server, a file server, a data server, and so on), which can be accessible over the internet using ‘ssh’. The processing module 103 can classify the logs into ‘warning’ and ‘info’ levels, and higher-level logs are relayed to the suitable location using a suitable format (such as, but not limited to, a json format).
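By way of illustration only, the following is a minimal Python sketch of such event logging using the standard ‘logging’ library with JSON-formatted records; the log file path and field names are illustrative assumptions, and relaying the logs to the Cloud or a server over ‘ssh’ is outside the sketch.

```python
# Minimal sketch of event logging with Python's "logging" library and JSON records.
import json
import logging
import time

logger = logging.getLogger("vehicle_events")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("/tmp/vehicle_events.log")   # assumed location
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_event(level, event, **details):
    """Serialize an event (with timestamp and details) as a single JSON line."""
    record = {"event": event, "timestamp": time.time(), **details}
    logger.log(level, json.dumps(record))

# 'warning' level events (e.g. irregular motion over potholes) and 'info' events.
log_event(logging.WARNING, "irregular_motion", cause="possible pothole",
          lat=12.9129, lon=77.6768)
log_event(logging.INFO, "mission_waypoint_reached", waypoint_id=3)
```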
[0033] The processing module 103 can have inbuilt exception handling, wherein each sub-system/module present in the processing module 103 is able to perform one or more tasks independently, while also being seamlessly integrated using a multi-threading process. For example, if any of the sensors stops giving data, the processing module 103 can continue performing based on inputs from other sensors. If the error is critical, the processing module 103 can execute one or more fallback safety protocols. For example, if a sensor fails, a maximal-voting based decision-making process can ensure that the vehicle’s current task is not interrupted and can also facilitate a smooth recovery from this failure.
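By way of illustration only, the following is a minimal Python sketch of a maximal-voting decision step, assuming each redundant module proposes a discrete action and a failed module reports nothing; the action names are illustrative, and tie-breaking and the critical-error path are simplified.

```python
# Minimal sketch of a maximal-voting fallback decision among redundant modules.
from collections import Counter

def maximal_vote(proposals):
    """Pick the action proposed by the most still-working modules."""
    valid = [p for p in proposals if p is not None]   # None marks a failed module
    if not valid:                                     # no module reporting: fail safe
        return "EMERGENCY_STOP"
    return Counter(valid).most_common(1)[0][0]

# One camera pipeline has failed (None); the remaining modules keep the task going.
print(maximal_vote(["CONTINUE", None, "CONTINUE", "SLOW_DOWN"]))   # CONTINUE
```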
[0034] While the on-board decision making can be orchestrated using edge computing in the processing module 103, a teleoperator located at the backend can administer the entire process and provide guidance to the vehicle by providing an auxiliary decision-making mechanism that can work in tandem with the on-board decision making. In an embodiment herein, the third-party operators can supersede the processing performed by the processing module 103 to handle corner cases and extraordinary situations.
[0035] Further, the teleoperators can relay signals generated internally in the vehicle by one or more sensors and/or control units (such as acceleration, braking, steering, light on/off, side indicators, and so on). These signals can be received and processed by a processor in the processing module 103 and relayed to corresponding hardware/module/control unit for execution.
[0036] There can be one or more processors in the processing module 103, and they can communicate with each other at high frequency. Any two processors that communicate with each other can establish a handshake confirming that a command has been received successfully. If the received command is not valid or is not found in the look-up table/command set of the command-receiving processor, then the command-receiving processor can send a response to the command-sending processor that the received command is invalid, so that the command-sending processor can re-issue the command to the command-receiving processor.
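By way of illustration only, the following is a minimal Python sketch of such an inter-processor handshake, in which the command-receiving side validates a command against its command set and the command-sending side re-issues on an invalid response; the command prefixes follow the examples given later in this description, and the transport is abstracted away.

```python
# Minimal sketch of the command handshake between two processors.
VALID_COMMAND_PREFIXES = ("ACCELERATE_", "BRAKE_", "RIGHT_STEER_", "LEFT_STEER_", "RESET_STEER")

def handle_command(command: str) -> str:
    """Receiving side: acknowledge known commands, otherwise request a re-issue."""
    if command.startswith(VALID_COMMAND_PREFIXES):
        return f"ACK {command}"
    return f"INVALID {command}"        # sender re-issues on receiving this response

def send_with_handshake(command, transmit, max_retries=3):
    """Sending side: retry until the receiver acknowledges or retries run out."""
    for _ in range(max_retries):
        response = transmit(command)
        if response.startswith("ACK"):
            return True
    return False

# Usage with a direct in-process "transport" standing in for the real link.
print(send_with_handshake("ACCELERATE_200", handle_command))   # True
print(send_with_handshake("FLY_10", handle_command))           # False
```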
[0037] While it may be preferable to use Snapdragon as the primary processing unit and Raspberry Pi as the secondary processing unit, it may be obvious to a person having ordinary skill in the art that any other suitable alternative may be used instead of Snapdragon as the primary processing unit and instead of Raspberry Pi as the secondary processing unit. Examples of suitable alternative primary processing units can be, or can be based on, Onion Omega2+, NVIDIA Jetson Nano Developer Kit (specially designed for AI projects), ASUS Tinker Board S, ClockworkPi, Arduino Mega 2560, Rock64 Media Board, Odroid-XU4, Cortex, Allwinner, Exynos, BeagleBone, and so on. Examples of alternative secondary processing circuits can be Arduino, PocketBeagle, Le Potato, Banana Pi M64, Orange Pi Zero, VIM2 SBC by Khadas, NanoPi NEO2, Helios64 by Kobol, ODROID-HC2, ESP 8266, ARM and Cuda-based processors.
[0038] In an example herein, to accelerate the vehicle to 200 RPM, the processing module 103 can issue a command ACCELERATE_200 on its Bluetooth that can talk to a Nano Bluetooth radio inbuilt into an Arduino.
[0039] In an example herein, to decelerate or brake the vehicle by a certain factor (for example, 15%), the processing module 103 can issue a command BRAKE_15 to the Nano Bluetooth radio inbuilt into the Arduino.
[0040] In an example herein, to steer the vehicle to its right by a certain degree (for example, 3 degrees), the processing module 103 can issue a command RIGHT_STEER_3 to the Nano Bluetooth radio inbuilt into the Arduino.
[0041] In an example herein, to steer the vehicle to its left by a certain degree (for example, 5 degrees), the processing module 103 can issue a command LEFT_STEER_5 to the Nano Bluetooth radio inbuilt into the Arduino.
[0042] In an example herein, to steer the vehicle straight, which is equivalent to “resetting” or “homing” the handlebar of the vehicle, the processing module 103 can issue a command RESET_STEER on its Bluetooth that can talk to the Nano Bluetooth radio inbuilt into the Arduino.
[0043] It is to be noted that the above examples are not to be construed as limiting, as alternatives to Arduino may be used and the various issued commands may be communicated by means other than Bluetooth.
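By way of illustration only, the following is a minimal Python sketch of how commands such as those in the examples above could be formatted and written to the microcontroller; a pyserial link over a hypothetical USB port is assumed here purely as a stand-in for the Bluetooth transport described above.

```python
# Minimal sketch of formatting and sending the example commands to the microcontroller.
import serial  # pyserial; assumed transport standing in for the Bluetooth link

def make_command(action, value=None):
    """Build command strings such as ACCELERATE_200, BRAKE_15 or RESET_STEER."""
    text = f"{action}_{value}" if value is not None else action
    return (text + "\n").encode()

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as link:   # hypothetical port
    link.write(make_command("ACCELERATE", 200))   # accelerate to 200 RPM
    link.write(make_command("RIGHT_STEER", 3))    # steer right by 3 degrees
    link.write(make_command("RESET_STEER"))       # reset/home the handlebar
```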
[0044] In an example herein, the processing module 103 can automatically trigger the right and left turn indicators when the vehicle is steered right or left by issuing steering commands. Similarly, the brake light shall turn on automatically when the processing module 103 issues the braking command.
[0045] Embodiments herein have used the terms ‘processing circuit’, ‘processor’, ‘microcontroller’, ‘microchip’ interchangeably to refer to the one or more processing units (i.e., the primary processing unit and the secondary processing unit). While the embodiments of the system 100 disclosed herein may refer to a primary processor and a secondary processor, the system 100 can make use of one processor alone to perform the various functions disclosed herein.
[0046] The various modules within the processing module 103 may communicate with each other through wired means or wirelessly (such as, but not limited to, Bluetooth). The system 100 can also make use of a microcontroller other than Arduino.
[0047] The system 100 can make use of pseudo-LiDAR technology. The cameras 101 from the camera array can extract information about the objects surrounding a vehicle. An advantage of the pseudo-LiDAR technology is that, combined with machine-learning techniques, it can allow for better range and depth from the images produced by the cameras 101. Another advantage is that it is less expensive than using LiDAR technology.
[0048] FIG. 2 depicts a cloud stack and an edge stack of the system 100, according to embodiments as disclosed herein. The cloud stack can include features for account management, rule engine storage, kinematics (also referred to as dynamic kinematics control), mission planning, central mapping, maintaining reports, and a HUD.
[0049] Account management can involve storing details about the vehicle or the user, such as who may be the user of the vehicle, details about a third party that can have administrator control (for remote operation) over the vehicle, a unique identification number associated with the vehicle, the kind of batteries or other components within the vehicle etc.
[0050] The rule engine can dictate the behaviour of the vehicles based on predefined algorithms, AI and machine learning, and apriori algorithms, together with real-world environment perception. The rule engine can also incorporate manual teleoperation from human users into its decision-making process. The scope of the rules may include, but is not limited to, control, navigation, manipulation, risk mitigation, planning, and threat avoidance.
[0051] The dynamic kinematics control module (also referred to herein as kinematics) can be a platform agnostic high and mid-level vehicle controller that can automatically select the best control algorithm based on the drive mechanism, form factor, and physical embodiment of the vehicle.
[0052] The mission planner may be a high-level integrated cognition, planning, prediction, and multi-agent fleet management system that can accept user-defined mission goals and instruct the vehicle fleet to achieve the mission objectives in the most efficient and risk-averse manner.
[0053] The central mapping system can be a multi-modal perception system that can accept individual sub-maps from various vehicles operating in their respective workspace, and can combine them to generate a single globally metrically accurate geometric map. The central mapping module can register, co-align, stitch, correct, and update the individual maps in real-time.
[0054] The reports module can store details about the various missions undertaken by the vehicle, maintain logs that are generated by the processing module 103, feedback from the vehicle, and any other such information.
[0055] The head-up-display (HUD) 309 can allow for tele-ops control (remote control) of the vehicle. The feed from the cameras 101 and the depth extracted from the feed can be stitched on the HUD 309 to get volumetric 4D vector visualization, which can be a 3D representation of the world using cameras 101.
[0056] The edge stack can include the plurality of sensors, among which can be a plurality of cameras 101, an intel module 203, a vision module 205, and a connectivity module 207.
[0057] The connectivity module 207 can allow for communication with the system 100 through GSM, WiFi, and Bluetooth Low Energy.
[0058] The vision module 205, which essentially uses pseudo-LiDAR technology, can receive the feed from the cameras 101, and can perform a series of processes that can generate an output that can be equivalent to that of an output from a LiDAR sensor. The various processes within the vision module can be performed by the primary processor. However, it is to be noted that this should not be construed as limiting, as the processing module 103 may include a single processor or a plurality of processors, where any processor among the plurality of processors can execute the various processes within the vision module. The output of the vision module 205 can be an RGB feed with depth extracted from the feed.
[0059] The intel module 203 can allow performance of the localization, mapping, mission planning (path planning and motion planning), and control processes based on an output from the plurality of sensors, output from the modules within the cloud stack (such as, but not limited to the rule engine), and output from the vision module 205. For example, the localization process can be performed with the help of an IMU sensor or any other sensor. The path planning process can be performed with the help of rules stored in the rule engine. For example, the rule engine may store a rule stating that certain areas along a route may be filled with potholes, and accordingly a path will be generated for the vehicle to move along a route where the potholes are avoided. The motion planning process can involve planning movement of the wheels of the vehicle, such as turning of the wheels at specific instances along the path (also used interchangeably with route). The control process can be performed by the intel module 203 where a command can be directed to the actuation layer 105, that can comprise a motor driver and a motor, for physical movement of the vehicle.
[0060] FIG. 3 depicts the functioning of the vision module 205, according to embodiments as disclosed herein. A feed multiplexer and queue unit (FMAQ) 301 within the vision module 205 can receive the feed from the plurality of cameras 101. The FMAQ 301 can receive the feed from the cameras 101 at a fixed rate of frames per second. The frame rate can vary from one camera 101 to another. An acquisition config 303 can control the prioritization of the cameras 101, and can intelligently assign the priority based on driving conditions. For example, if the vehicle is moving in the leftmost lane, then a higher priority can be assigned to the right camera.
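By way of illustration only, the following is a minimal Python sketch of such priority assignment by the acquisition config, using the lane example above; the camera names and priority values are illustrative assumptions.

```python
# Minimal sketch of driving-condition-based camera prioritization.
def assign_camera_priorities(current_lane: str) -> dict:
    """Return a priority per camera; higher numbers mean more frames allocated."""
    priorities = {"front": 3, "left": 2, "right": 2, "rear": 1}
    if current_lane == "leftmost":
        priorities["right"] = 4    # traffic appears from the right, so watch it more closely
    elif current_lane == "rightmost":
        priorities["left"] = 4
    return priorities

print(assign_camera_priorities("leftmost"))
# {'front': 3, 'left': 2, 'right': 4, 'rear': 1}
```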
[0061] The output feed from the FMAQ 301 can be compressed and optimized by a compression unit 305, wherein the compression unit 305 can use a compression protocol for compressing the output feed. An example of a compression protocol that can be used is Web Real-Time Communication (WebRTC). The output feed from the FMAQ 301 can also be sent to a resizing unit 307 for resizing of the feed to a value that can enable the depth extraction engine 311 to extract depth from the output feed. Examples of depth estimation methods that can be used for extracting depth can be monocular depth or stereo depth. The combination of the FMAQ 301, compression unit 305, and resizing unit 307 can be included in a broker unit 300.
[0062] The depth extracted from the output feed by the depth extraction engine 311 can be in 3D, which can then be directed to a 3D Depth to 2D Depth conversion unit 313 that can perform a laser scan to transform the 3D depth to 2D form.
[0063] After the conversion process, the depth extraction engine 311 can receive the 2D depth extraction, and direct it to the compression unit 305, that can then direct the 2D extracted depth to the intel module 203 and HUD 309. The intel module 203 can make use of the 2D extracted depth in order to perform the mission planning process and the control process.
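By way of illustration only, the following is a minimal Python sketch of reducing a 3D depth image to a 2D laser-scan-like output by taking the nearest range per image column within a horizontal band around the optical axis; the camera intrinsics, band height, and NumPy-based implementation are illustrative assumptions rather than the conversion unit 313 itself.

```python
# Minimal sketch of converting a depth image into a 2D "laser scan".
import numpy as np

def depth_image_to_scan(depth_m: np.ndarray, fx: float, cx: float, band: int = 10):
    """depth_m: HxW depth image in metres. Returns (angles_rad, ranges_m) per column."""
    h, w = depth_m.shape
    rows = depth_m[h // 2 - band: h // 2 + band, :]            # band around the horizon
    ranges = np.nanmin(np.where(rows > 0, rows, np.nan), axis=0)  # nearest valid range per column
    angles = np.arctan2(np.arange(w) - cx, fx)                 # bearing of each image column
    return angles, ranges

if __name__ == "__main__":
    fake_depth = np.full((480, 640), 5.0)
    fake_depth[230:250, 300:340] = 1.2                         # an obstacle ahead
    angles, ranges = depth_image_to_scan(fake_depth, fx=525.0, cx=319.5)
    print(ranges.min())                                        # 1.2
```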
[0064] The vision module 205 can identify characteristics associated with each identified object within the feed, such as, but not limited to, shape of the object, distance of the object from the vehicle, and direction and speed of movement of the object, through machine vision techniques (such as, but not limited to, stereo depth estimation and monocular depth estimation). The vision module 205 can use one or more datasets (such as, but not limited to, KITTI, GTI, and custom datasets) to train object detection methodologies (such as, but not limited to, YoloV3, Faster Region-based Convolutional Neural Network, and so on). The monocular depth detection can be trained with open source data sets, inhouse collected data sets, and synthetic data sets, that can be generated from computer graphics or simulation software. The vision module 205 can train the models on progressive frames, using suitable techniques such as, but not limited to, sliding window and heatmap generation techniques, to compare and compute the direction of the vehicle movement.
[0065] FIG. 4 depicts the information flow and medium of communication between the various modules of the system, according to embodiments as disclosed herein. The primary processing unit may have a Linux-Based Operating System (Linux Based OS) that can control most of the functions within the system 100. The Linux-Based OS can be a custom-built OS that can be made available in the primary processor. The Linux-Based OS can receive data from the cameras, and the sensor data, trajectory data, visual-inertial odometry data (VIO), localization data, and joystick data from the Socket and API module of the vehicle. A database (an example of which is Redis) can store the sensor data and can receive it from the Socket and API module. It is to be noted that the embodiments disclosed herein are not limited to using the Socket and API module for communication of the data from the cameras and sensors.
[0066] The data received and processed by the Linux-Based OS in the primary processing unit can be transmitted to the secondary processor, which can have a Robot Operating System (ROS). The ROS can allow for performance of the path planning, motion planning, and control processes. The ROS can be loaded on a Linux Operating System. It is to be understood that the primary processor having the Linux-Based OS and the secondary processor having ROS is to be construed as non-limiting, as each processor could make use of different operating systems. It is also to be noted that the outlined functions performed by the primary processor and the secondary processor are to be construed as non-limiting, as the primary processor may be able to perform the functions of the secondary processor, and vice versa, or the functions can be performed by one processor alone.
[0067] The ROS autonomous navigation stack (part of the ROS) can incorporate the move_base, ros_control packages together with the proprietary vision module 205. It can generate global path plans and can execute them using local cost maps and dynamic trajectory execution. The move_base module can generate cmd_vel instructions that can be executed by the low-level control module to actuate the vehicles into the desired states.
[0068] The ROS can send trajectory, cmd_vel, and ultrasonic commands to the Linux Based OS, which in turn can send these commands to a microcontroller (an example of which is Arduino) that can be connected to the actuation layer 105 (motor driver and motors) and ultrasound units for performance of the functions associated with the respective commands.
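By way of illustration only, the following is a minimal Python sketch of the low-level side of this flow under a ROS 1 setup with rospy: a node subscribes to the cmd_vel generated by move_base and forwards the commanded velocities onward; the forwarding function is a hypothetical placeholder for the link to the microcontroller.

```python
#!/usr/bin/env python3
# Minimal sketch of a low-level control node consuming cmd_vel from move_base.
import rospy
from geometry_msgs.msg import Twist

def forward_to_actuators(linear_mps, angular_radps):
    # Hypothetical placeholder for the serial/Bluetooth link to the microcontroller.
    rospy.loginfo("actuate: v=%.2f m/s, w=%.2f rad/s", linear_mps, angular_radps)

def on_cmd_vel(msg: Twist):
    forward_to_actuators(msg.linear.x, msg.angular.z)

if __name__ == "__main__":
    rospy.init_node("low_level_control")
    rospy.Subscriber("cmd_vel", Twist, on_cmd_vel)
    rospy.spin()
```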
[0069] The HUD 309 can communicate with the primary processor having Linux-Based OS so that information can be made available on the HUD 309, such as video feed, thereby informing a user of the driving environment. The HUD 309 can have three cameras and a cock-pit view.
[0070] The simulated environment can act as a proxy for the real world. It can have simulated sensors, and simulated vehicle kinematics and physics. The simulator can be used for rapid prototyping of the control and planning, localisation, and mapping processes.
[0071] Communication between one or more components, for example between the primary processor and microcontroller, can take place through data transportation protocols such as, but not limited to, USB, CAN etc.
[0072] FIG. 5 is a diagram illustrating the operation of the mission planner process on a vehicle integrated with the system 100, according to embodiments as disclosed herein. Through a graphical user interface, the user can define a global mission on a predefined global map, that can be visible to the user on the HUD 309. The global mission defined by the user can include the destination that the user would like to travel to and/or the various tasks that could be performed en route. This global mission can be loaded to the vehicle, wherein the vehicle has a predefined local map. The vehicle may autonomously move and perform the tasks input by the user with the help of real-time localization using vision, GPS, and Odometry. The vehicle may also have a feature for dynamic obstacle avoidance. The end result of the mission planner process can be the completion of the global mission, which can be the vehicle’s arrival at the user’s destination and performance of the user-defined tasks.
[0073] FIG. 6 is a diagram illustrating how a predefined local map that is available on one vehicle can be stitched to a different predefined local map on a second vehicle, according to embodiments as disclosed herein. A first vehicle can receive a predefined local map from at least one other vehicle, such that the predefined local map received from the at least one other vehicle can be stitched together with the first vehicle’s predefined local map to form a global map so as to obtain a holistic view. The first vehicle can also obtain information from the at least one other vehicle to modify its local map or route. For example, if the at least one other vehicle receives information that a certain area in the at least one other vehicle’s local map is closed off, the at least one other vehicle’s local map can be updated to reflect this new information, and the first vehicle can be made aware of this change.
[0074] FIG. 7 depicts the dynamic kinematics control module and how it can enable the system 100 to work with vehicles having various form factors, according to embodiments as disclosed herein. The dynamic kinematics control module can have 4WD differential control, tricycle kinematic control, Ackermann steering control, differential control, drive and steer control, and so on. Depending on the kind of vehicle the system 100 is integrated with, the kinematics module will allow for a specific control that can allow the vehicle to be autonomously driven. For example, for a tricycle, the kinematics module can allow for tricycle kinematic control. For a four-wheeler, the kinematics module can allow for 4WD differential control.
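By way of illustration only, the following is a minimal Python sketch of form-factor-specific kinematic control, mapping the same commanded body velocity either to differential-drive wheel speeds or to an Ackermann/tricycle steering angle; the track width and wheelbase are illustrative assumptions.

```python
# Minimal sketch of mapping a commanded body velocity (v, w) to different drive mechanisms.
import math

def differential_wheel_speeds(v, w, track_width=0.5):
    """Return (left, right) wheel linear speeds in m/s for a differential drive."""
    return v - w * track_width / 2.0, v + w * track_width / 2.0

def ackermann_steering_angle(v, w, wheelbase=1.2):
    """Return the front steering angle in radians for an Ackermann/tricycle layout."""
    if abs(v) < 1e-6:
        return 0.0                      # cannot turn in place with this geometry
    return math.atan(wheelbase * w / v)

print(differential_wheel_speeds(1.0, 0.4))   # (0.9, 1.1)
print(ackermann_steering_angle(1.0, 0.4))    # ~0.447 rad
```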
[0075] FIG. 8 is an example diagram illustrating how the teleoperations (tele-ops/teleops) and autonomous systems interact with the vehicle, according to embodiments as disclosed herein. The vehicle can be controlled either remotely by a user providing a command to the actuation layer 105 through the HUD 309, or when the processing module 103 generates a command that is directed to the actuation layer 105. The vehicle may also be remotely controlled with a joystick. The commands from the user can result in a high level control of the vehicle. The output from the vision module 205, which can be responsible for depth extraction from the feed from one or more cameras 101, can be directed to the intel module 203. The intel module 203, which can be responsible for the localization, mapping, mission planning, and control processes, can also receive input from the rule engine database. The rule engine database can store rules that can affect the mission planning process by having rules that can affect path planning and motion planning. Execution of the control process can generate a command that can result in high level control of the vehicle.
[0076] FIG. 9 depicts the state management of the vehicle, according to embodiments as disclosed herein. The vehicle can be in an idle state when no command has been issued to the vehicle. When a command has been issued to the vehicle, it can be in various states such as, but not limited to, a maintenance state, user state, autonomy state, or teleoperating state. The vehicle can be in the maintenance state when any maintenance for the vehicle is scheduled. Once the maintenance is complete, the vehicle can be ready for functioning and can go back to the idle state. The state of the vehicle can change from the idle state or the teleops state to the autonomy state when waypoints (a goal or mission) are provided to the vehicle. Once the mission is complete, the vehicle can go back to the idle state. The state of the vehicle can change from the autonomy state to the teleops state when the user or a teleoperator takes control of the vehicle through means such as, but not limited to, a joystick or commands issued to the HUD 309. The user state can be enabled for certain vehicles that do not accommodate a person. An example of such a vehicle is a lawn mower. In a user state, the vehicle can start a ride, and once the ride is complete, the vehicle goes back to the idle state.
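By way of illustration only, the following is a minimal Python sketch of the state management described above, with a transition table reconstructed from this paragraph; the event names are illustrative assumptions.

```python
# Minimal sketch of the vehicle state machine (idle, maintenance, user, autonomy, teleops).
TRANSITIONS = {
    ("idle", "waypoints_received"): "autonomy",
    ("teleops", "waypoints_received"): "autonomy",
    ("autonomy", "mission_complete"): "idle",
    ("autonomy", "operator_takeover"): "teleops",
    ("idle", "maintenance_scheduled"): "maintenance",
    ("maintenance", "maintenance_complete"): "idle",
    ("idle", "ride_started"): "user",
    ("user", "ride_complete"): "idle",
}

def next_state(state, event):
    """Unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ("waypoints_received", "operator_takeover", "waypoints_received", "mission_complete"):
    state = next_state(state, event)
    print(event, "->", state)
```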
[0077] FIG. 10 is an example block diagram of the system 100 when integrated with a vehicle, according to embodiments as disclosed herein. The processing module 103 can receive data from the vehicle peripheral devices (vehicle lights, batteries, actuators, etc.), distance sensors (RADAR, SONAR, Cameras, etc.), and feedback from the vehicle. The feedback from the vehicle can include movement of the vehicle in response to a command issued to the actuation layer 105. Based on the data received by the processing module 103, commands can be issued to the motor actuation for performance of the functions associated with the issued commands.
[0078] FIG. 11 is an example block diagram illustrating the power transfer within the system 100, according to embodiments as disclosed herein. The battery from the vehicle can pass through a high voltage DC to DC converter and a low voltage DC to DC converter. The actuator 105 (motor driver and motor) can receive the output from the high voltage DC to DC converter. The processors can receive the output from the low voltage DC to DC converter.
[0079] Embodiments herein enable the vehicle to be driven autonomously and/or by a remotely located operator/entity. In an embodiment herein, the vehicle can be driven autonomously based on inputs from the processing module 103. In an embodiment herein, the vehicle can be driven based on inputs from the remotely located operator/entity. In an embodiment herein, the vehicle can be driven autonomously based on inputs from the processing module 103, with the remotely located operator/entity serving as a backup (wherein the remotely located operator/entity can provide inputs in cases such as, but not limited to, emergency scenarios, on detecting potentially hazardous scenarios, accidents, undetected obstacles/impediments, and so on). Sonars can be used for low range, depth from camera 101 for mid-range, and RADAR for long range.
[0080] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements include blocks which can be at least one of a hardware device, or a combination of hardware device and software module.
[0081] The embodiment disclosed herein describes methods and systems for enabling autonomous driving for a vehicle. Therefore, it is understood that the scope of the protection is extended to such a program and in addition to a computer readable means having a message therein, such computer readable storage means contain program code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in at least one embodiment through or together with a software program written in, e.g., Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or several software modules being executed on at least one hardware device. The hardware device can be any kind of portable device that can be programmed. The device may also include means which could be e.g., hardware means like e.g., an ASIC, or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. The embodiments described herein could be implemented partly in hardware and partly in software. Alternatively, the embodiments described herein may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[0082] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the embodiments as described herein.
STATEMENT OF CLAIMS
I/We claim:
1. A system (100) for controlling the autonomous movement of a vehicle, comprising:
a plurality of sensors, wherein the plurality of sensors includes a plurality of cameras (101); and
a processing module (103), wherein the processing module comprises at least one processor, wherein the at least one processor is configured to perform the following operations:
compressing a feed from the plurality of cameras (101), wherein the feed is compressed using a compression protocol;
extracting a depth from the feed from the plurality of cameras (101), wherein the depth is extracted from the feed using a depth estimation method;
localizing of the vehicle, wherein the process of localizing the vehicle is dependent on the data received by at least one of the following: a geo-location sensor, an inertial measurement unit of the vehicle, and a visual-inertial simultaneous localization and mapping process;
mapping of the vehicle, wherein the mapping of the vehicle results in the generation of a map of the vehicle environment, and wherein the mapping of the vehicle is dependent on the location of the vehicle;
planning a mission, wherein the planning of the mission involves execution of a mission based on an input from a user; and
executing a control process, wherein the execution of the control process is dependent on the location of the vehicle, the mapping of the vehicle, and a destination of the vehicle, and wherein the execution of the control process results in the generation of a command that is directed to an actuator (105) for physical movement of the vehicle.

2. The system (100) of claim 1, wherein a primary processor in the processing module (103) is configured to perform at least one of the following operations for the vehicle: compressing and optimizing the feed from the plurality of cameras, localizing the vehicle, and mapping of the vehicle.

3. The system (100) of claim 2, wherein a secondary processor in the processing module (103) is configured to perform at least one of the following functions: planning a mission and executing a control process.

4. The system (100) of claim 1, wherein the processing module (103) further comprises one or more triggers for the actuator (105).

5. The system (100) of claim 1, wherein the actuator (105) comprises a motor driver and a motor.

6. The system (100) of claim 1, wherein the cameras (101) are either monocular cameras or stereo cameras.

7. The system (100) of claim 1, wherein the depth estimation method is either monocular depth estimation or stereo depth estimation.

8. The system (100) of claim 1, further comprising a microcontroller, wherein the command generated by the execution of the control process is directed to the actuator (105) by the microcontroller.

9. The system (100) of claim 8, wherein the microcontroller is Arduino.

10. The system (100) of claim 1, wherein the operating system for the at least one processor is Linux Based.

11. A method for autonomously driving a vehicle, comprising:
receiving, by a processing module (103) in a system (100) integrated with the vehicle, data from at least one of: one or more cameras and one or more sensors in the vehicle;
compressing a feed from the one or more cameras (101), by the processing module (103), using a compression protocol;
extracting a depth from the feed, by the processing module (103), using a depth estimation method;
determining a location of the vehicle, by the processing module (103), based on data from at least one of the following: the one or more cameras, a geo-location sensor, and a visual-inertial simultaneous localization and mapping process;
mapping the vehicle, by the processing module (103), wherein mapping the vehicle results in the generation of a map of the vehicle environment, and wherein mapping the vehicle is dependent on the location of the vehicle;
executing a mission planning process, by the processing module (103), wherein the mission planning process involves creation of a mission based on an input from a user; and
generating a command upon execution of a control process, by the processing module (103), wherein the execution of the control process is dependent on the location of the vehicle, the mapping of the vehicle, and a destination of the vehicle, and wherein the generated command is directed to an actuator (105) for physical movement of the vehicle.

12. The method of claim 11, wherein the feed from the one or more cameras (101) is resized prior to the extraction of the depth from the feed.

13. The method of claim 11, wherein the compression protocol for compressing the feed is web real-time communication protocol.

14. A method for detecting objects by an autonomously moving vehicle, comprising:
receiving, by a feed multiplexer and queue unit (301), of a feed from a plurality of cameras (101);
transmitting, by the feed multiplexer and queue unit (301), the feed from the plurality of cameras (101) to a compression unit (305), wherein the compression unit (305) compresses the feed from the plurality of cameras (101);
resizing, by a resizing unit (307), of the feed from the plurality of cameras (101); and
extracting of the depth from the feed from the plurality of cameras (101), by a depth extraction engine (311), wherein the depth extraction engine (311) uses a depth estimation method to extract the depth from the feed.

15. The method of claim 14, further comprising the step of converting the three-dimensional extracted depth to two-dimensional form by a three-dimensional to two-dimensional conversion unit (313).

16. The method of claim 15, wherein the conversion of the three-dimensional extracted depth to two-dimensional form is done through laser scanning.

17. The method of claim 15, further comprising the step of directing the two-dimensional extracted depth to a head-up-display (309).

Dated this 13th January 2022

Signature
Name of the Signatory: Nitin Mohan Nair
Patent Agent-2585

Documents

Application Documents

# Name Date
1 202141002993-STATEMENT OF UNDERTAKING (FORM 3) [21-01-2021(online)].pdf 2021-01-21
2 202141002993-PROVISIONAL SPECIFICATION [21-01-2021(online)].pdf 2021-01-21
3 202141002993-OTHERS [21-01-2021(online)].pdf 2021-01-21
4 202141002993-FORM FOR STARTUP [21-01-2021(online)].pdf 2021-01-21
5 202141002993-FORM FOR STARTUP [21-01-2021(online)]-1.pdf 2021-01-21
6 202141002993-FORM FOR SMALL ENTITY(FORM-28) [21-01-2021(online)].pdf 2021-01-21
7 202141002993-FORM 1 [21-01-2021(online)].pdf 2021-01-21
8 202141002993-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [21-01-2021(online)].pdf 2021-01-21
9 202141002993-DRAWINGS [21-01-2021(online)].pdf 2021-01-21
10 202141002993-DECLARATION OF INVENTORSHIP (FORM 5) [21-01-2021(online)].pdf 2021-01-21
11 202141002993-Proof of Right [25-01-2021(online)].pdf 2021-01-25
12 202141002993-FORM-26 [25-01-2021(online)].pdf 2021-01-25
13 202141002993-Correspondence_Form1, Form30, Power of Attorney_01-02-2021.pdf 2021-02-01
14 202141002993-FORM 18 [13-01-2022(online)].pdf 2022-01-13
15 202141002993-DRAWING [13-01-2022(online)].pdf 2022-01-13
16 202141002993-COMPLETE SPECIFICATION [13-01-2022(online)].pdf 2022-01-13
17 202141002993-FER.pdf 2022-09-09
18 202141002993-RELEVANT DOCUMENTS [05-01-2023(online)].pdf 2023-01-05
19 202141002993-POA [05-01-2023(online)].pdf 2023-01-05
20 202141002993-FORM 13 [05-01-2023(online)].pdf 2023-01-05
21 202141002993-AMENDED DOCUMENTS [05-01-2023(online)].pdf 2023-01-05
22 202141002993-OTHERS [27-02-2023(online)].pdf 2023-02-27
23 202141002993-FORM 3 [27-02-2023(online)].pdf 2023-02-27
24 202141002993-FER_SER_REPLY [27-02-2023(online)].pdf 2023-02-27
25 202141002993-CLAIMS [27-02-2023(online)].pdf 2023-02-27
26 202141002993-ABSTRACT [27-02-2023(online)].pdf 2023-02-27
27 202141002993-PatentCertificate07-03-2024.pdf 2024-03-07
28 202141002993-IntimationOfGrant07-03-2024.pdf 2024-03-07

Search Strategy

1 sserE_09-09-2022.pdf

ERegister / Renewals

3rd: 15 May 2024

From 21/01/2023 - To 21/01/2024

4th: 15 May 2024

From 21/01/2024 - To 21/01/2025

5th: 10 Dec 2024

From 21/01/2025 - To 21/01/2026