
Autonomous Driving Control System For A Vehicle

Abstract: A system (100) and method (400) are provided for controlling one or more operations of a vehicle. The system (100) includes a control circuitry (105), a first set of sensors (101), and a second set of sensors (103), where the control circuitry (105) receives a plurality of images and a set of attributes from the first set of sensors (101) and the second set of sensors (103), respectively. The set of attributes pertains to at least a distance between the vehicle and one or more objects during a current time window. The control circuitry (105) determines a future frame of the one or more objects indicating the future location of the one or more objects for a future time window. Based on the determination of the future frame of the one or more objects, the control circuitry (105) predicts a set of parameters for the future time window according to a prediction learning model.


Patent Information

Application #
202241042284
Filing Date
23 July 2022
Publication Number
04/2024
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application

Applicants

MUST Research Labs LLP
Flat 109, Block 4, My Home Krishe, Hyderabad - 500046, Telangana, India.

Inventors

1. MUSTAFI, Joy
Flat 301, Block 4, My Home Vihanga, Hyderabad - 500046, Telangana, India.
2. GHOSH, Sagnik
My Home Vihanga, Block 11, Flat 406, Financial District, Hyderabad - 500046, Telangana, India.
3. MITTAL, Swayam
101, Govind Tower, Harihar Singh Road, Bariatu, Ranchi, Jharkhand - 834009, India.
4. BERA, Santanu
Duke Residency, Rohini - 4C, 13 Chanditala Lane, Kolkata - 700040, West Bengal, India.

Specification

Description:
TECHNICAL FIELD
[0001] The present disclosure relates to an autonomous vehicle. More specifically, the present disclosure relates to an autonomous driving control system for reducing risk and enhancing safety while driving based on multi-modal input information.

BACKGROUND
[0002] The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Autonomous vehicles are capable of navigating through complex traffic situations on the road. Autonomous vehicles include a driving control system to detect the surroundings using a variety of techniques such as Light Detection and Ranging (LIDAR), computer vision, radar, odometry, and the Global Positioning System (GPS). The driving control system also interprets sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage.
[0004] As automation increases, driving control systems face correspondingly tougher challenges and must therefore be able to react appropriately to a variety of situations. Automated vehicles must adhere to existing traffic laws, although following traffic laws alone may not be sufficient for the comfortable driving experience expected by passengers.
[0005] Further, existing driving control systems do not take into account uneven routes full of barriers and potholes. Moreover, conventional driving control systems may fail to identify traffic signs and thus fail to follow traffic regulations.
[0006] There is, therefore, a need in the art for an autonomous driving control system that can overcome at least the above-mentioned challenges in the art.

OBJECTS OF THE PRESENT DISCLOSURE
[0007] It is an object of the present disclosure to provide an automated driving control system to move the vehicle while avoiding random obstacles in the path with higher stability as compared to the conventional systems.
[0008] It is an object of the present disclosure to provide an automated driving control system that allows the vehicle to drive in uneven road conditions and during traffic congestion.
[0009] It is an object of the present disclosure to provide an automated driving control system to facilitate effective route planning.

SUMMARY
[0010] The present disclosure relates to an autonomous vehicle. More specifically, the present disclosure relates to an autonomous driving control system for reducing risk and enhancing safety while driving based on multi-modal input information.
[0011] The present disclosure provides an automated driving control system that allows the vehicle to move to the destination location while avoiding random obstacles in the path with higher stability as compared to conventional systems. The automated driving control system includes a plurality of sensors configured to receive multi-modal parameters. The multi-modal parameters received from the plurality of sensors are integrated and then provided to a prediction learning model, where the prediction learning model is a neural network-based deep learning model. The prediction learning model is trained through experiential learning via reinforcement. In particular, in the prediction learning model, a particular weight is assigned to each of the multi-modal parameters, and such weights are updated as the model is trained. Each time the automated driving control system predicts one or more parameters for the movement of the vehicle from the current location/position to the destination location/position, the weights of the multi-modal parameters are updated based on how accurate the prediction was. In this manner, the prediction learning model is trained to accurately predict the parameters, thereby ensuring the safety of the passengers.
[0012] An aspect of the present disclosure relates to an autonomous driving control system for a vehicle. The system includes a first set of sensors and a second set of sensors configured with the vehicle. The first set of sensors captures a plurality of images associated with one or more objects within a distance from the vehicle during a current time window. The second set of sensors senses a set of attributes pertaining to at least a distance between the vehicle and the one or more objects during the current time window. The system further includes a control circuitry communicatively coupled with the first set of sensors and the second set of sensors. The control circuitry includes a processor coupled to a memory, the memory storing one or more executable instructions which, when executed by the processor, cause the system to: receive the plurality of images and the set of attributes from the first set of sensors and the second set of sensors, respectively; determine a future frame of one or more objects indicating the future location of the one or more objects for a future time window; and, based on the determination of the future frame of the one or more objects, predict a set of control parameters for the future time window to move the vehicle to a destination location, the prediction being performed according to a prediction learning model, wherein the prediction learning model indicates a weighted combination of the plurality of images and the set of attributes.
[0013] In an embodiment, the processor may be configured to predict a trajectory for the vehicle to move from a current location to a destination location.
[0014] In an embodiment, the prediction learning model is trained by: comparing the predicted set of parameters with corresponding predefined values to determine an error signal; and adjusting weights associated with at least one of the plurality of images and the set of attributes according to the error signal, to minimize the error between the predicted set of parameters and corresponding predefined values.
[0015] In an embodiment, the set of control parameters may include any or a combination of the steering angle and the amount of acceleration or deceleration associated with the vehicle, and wherein the steering angle is predicted by determining the angles of lane lines, the curvature of the road, sign boards, and the turning angles of other vehicles and pedestrians.
[0016] In an embodiment, the first set of sensors comprises an image capturing device to capture a plurality of images to detect one or more objects present around the vehicle.
[0017] In an embodiment, the set of attributes pertains to a sound signal produced within a predefined distance from the vehicle, location and position of the vehicle, and wherein the second set of sensors comprises any or a combination of microphone, ultrasonic sensor, and GPS sensor.
[0018] In an embodiment, the system comprises an alarming unit configured to generate an alarm sound when at least one of the set of attributes exceeds a corresponding threshold value.
[0019] In an embodiment, the processor is configured to compare the plurality of images of the one or more objects with a prestored set of images to identify the one or more objects and to categorize each of the plurality of images into a set of categories.
[0020] An aspect of the present disclosure relates to a method for controlling one or more operations of a vehicle, where the method is performed by the processor. The method includes receiving a plurality of images and a set of attributes from a first set of sensors and a second set of sensors, respectively; and determining a future frame of one or more objects indicating the future location of the one or more objects for a future time window. The plurality of images is associated with one or more objects within a distance from the vehicle during a current time window. The set of attributes pertains to at least a distance between the vehicle and the one or more objects during the current time window. The method further comprises, based on the determination of the future frame of the one or more objects, predicting a set of parameters for the future time window to move the vehicle to a destination location. The prediction is performed according to a prediction learning model, where the prediction learning model indicates a weighted combination of the plurality of images and the set of attributes.

BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0022] The diagrams are for illustration only and thus do not limit the present disclosure, wherein:
[0023] FIG. 1 illustrates exemplary functional components of an automated driving control system in accordance with an embodiment of the present disclosure.
[0024] FIG. 2 illustrates exemplary representations of network architecture of the automated driving control system, in accordance with an embodiment of the present disclosure.
[0025] FIG. 3 illustrates an exemplary pictorial representation of sequence of operations of the automated driving control system, in accordance with an embodiment of the present disclosure.
[0026] FIG. 4 illustrates a flow diagram illustrating a method for controlling one or more operations of a vehicle, in accordance with an embodiment of the present disclosure.
[0027] FIG. 5 illustrates an exemplary computer system to implement the proposed system in accordance with embodiments of the present disclosure.

DETAILED DESCRIPTION
[0028] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[0029] Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, firmware and/or by human operators.
[0030] Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, PROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).
[0031] Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
[0032] If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0033] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[0034] Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this invention will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).
[0035] While embodiments of the present invention have been illustrated and described, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the invention, as described in the claim.
[0036] The present disclosure provides an automated driving control system that allows the vehicle to move to the destination location while avoiding random obstacles in the path with higher stability as compared to conventional systems. The automated driving control system includes a plurality of sensors configured to receive multi-modal parameters. The multi-modal parameters received from the plurality of sensors are integrated and then provided to a prediction learning model, where the prediction learning model is a neural network-based deep learning model. The prediction learning model is trained through experiential learning via reinforcement. In particular, in the prediction learning model, a particular weight is assigned to each of the multi-modal parameters, and such weights are updated as the model is trained. Each time the automated driving control system predicts one or more parameters for the movement of the vehicle from the current location/position to the destination location/position, the weights of the multi-modal parameters are updated based on how accurate the prediction was.
[0037] In other words, the prediction learning model is based on computer vision and deep learning: the results of each input are compressed into an array of fixed length, and the generated array is used as input to the deep reinforcement-based algorithm for experiential learning. The algorithms for steering, acceleration, and brake are executed simultaneously in different computation graphs to predict all driving control values for each frame of input signal. In this manner, a hybrid approach is used to predict the controls based on multi-modal input according to the prediction learning model, and to train the prediction learning model to predict the parameters accurately, thereby ensuring the safety of the passengers.
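As a concrete illustration of the fixed-length compression and integration step described above, the following Python sketch pads or truncates each modality's features to a common length and concatenates them. The vector length, function names, and feature shapes are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch of "compress each input into a fixed-length array":
# every modality is reduced to a vector of the same length, and the
# concatenation forms the multi-modal input to the prediction model.
import numpy as np

VECTOR_LEN = 64  # assumed fixed length per modality


def encode(features: np.ndarray) -> np.ndarray:
    """Pad or truncate a raw feature vector to VECTOR_LEN."""
    out = np.zeros(VECTOR_LEN, dtype=np.float32)
    n = min(len(features), VECTOR_LEN)
    out[:n] = features[:n]
    return out


def fuse(camera, audio, ultrasonic, gps):
    """Concatenate fixed-length arrays into one multi-modal input."""
    return np.concatenate([encode(camera), encode(audio),
                           encode(ultrasonic), encode(gps)])


fused = fuse(np.random.rand(128), np.random.rand(40),
             np.random.rand(4), np.random.rand(2))
print(fused.shape)  # (256,) -> input array for the deep RL policy
```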
[0038] FIG. 1 illustrates exemplary functional components of an automated driving control system 100 in accordance with an embodiment of the present disclosure.
[0039] In an aspect, the automated driving control system 100 (also referred to as the system 100, hereinafter) is configured with a vehicle to control operations of the vehicle based on multi-modal parameters received from one or more sensors. The one or more sensors may include a first set of sensors 101 and a second set of sensors 103, configured over one or more body parts of the vehicle. The first set of sensors 101 may be configured to capture a plurality of images associated with one or more objects present within a distance from the vehicle during a current time window. The first set of sensors 101 may include an image capturing device to capture a plurality of images to detect one or more objects present around the vehicle. Based on the captured plurality of images, one or more objects such as a traffic signal, a person crossing the road, trees, animals, and so on are detected around the vehicle. For example, the captured images are compared with the pre-stored images, and once a captured image matches one of the pre-stored images, the object in the image is identified.
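One plausible realization of this compare-against-pre-stored-images step is template matching. The sketch below uses OpenCV for illustration; the gallery, its labels, and the 0.8 score threshold are assumptions rather than details from the disclosure.

```python
# Minimal sketch: match a captured frame against a gallery of pre-stored
# reference images using normalized cross-correlation template matching.
import cv2
import numpy as np


def identify_object(frame_gray, gallery, threshold=0.8):
    """Return the label of the best-matching reference image, if any."""
    best_label, best_score = None, threshold
    for label, ref in gallery.items():
        res = cv2.matchTemplate(frame_gray, ref, cv2.TM_CCOEFF_NORMED)
        score = float(res.max())
        if score > best_score:
            best_label, best_score = label, score
    return best_label


# Hypothetical gallery entry and frame, for demonstration only.
gallery = {"stop_sign": np.random.randint(0, 255, (32, 32), np.uint8)}
frame = np.random.randint(0, 255, (240, 320), np.uint8)
print(identify_object(frame, gallery))  # prints a label or None
```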
[0040] In another embodiment, the second set of sensors 103 may be configured to sense a set of attributes pertaining to at least a distance between the vehicle and the one or more objects during a current time window. In addition, the set of attributes may also pertain to a sound signal produced within a predefined distance from the vehicle, location and position of the vehicle. The second set of sensors 103 may include any or a combination of microphone, ultrasonic sensor, and GPS sensor.
[0041] In an embodiment, the system 100 can be implemented using any or a combination of hardware components and software components such as a cloud, a server, a computing system, a computing device, a network device and the like. Further, the system 100 can interact with any computing device through a website or an application that can reside in the computing device. In an implementation, the system 100 can be accessed by a website or application that can be configured with any operating system, including but not limited to, Android™, iOS™, and the like. Examples of the computing devices can include, but are not limited to, a computing device associated with industrial equipment or an industrial equipment based asset, a smart camera, a smart phone, a portable computer, a personal digital assistant, a handheld device and the like.
[0042] In an aspect, the system 100 may include a control circuitry 105 that includes one or more processor(s) 102. The one or more processor(s) 102 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 102 are configured to fetch and execute computer-readable instructions stored in a memory 104 of the system 100. The memory 104 may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory 104 may comprise any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0043] The system 100 may also comprise an interface(s) 106. The interface(s) 106 may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 106 may facilitate communication of the system 100 with various devices coupled to the system 100 such as an input unit and an output unit. The interface(s) 106 may also provide a communication pathway for one or more components of the system 100. Examples of such components include, but are not limited to, processing engine(s) 108 and database 110.
[0044] The processing engine(s) 108 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 108. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 108 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 108 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 108. In such examples, the system 100 may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 100 and the processing resource. In other examples, the processing engine(s) 108 may be implemented by electronic circuitry.
[0045] The database 110 may comprise data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) 108.
[0046] In an exemplary embodiment, the processing engine(s) 108 may comprise a determining unit 112, a predicting unit 114, and other unit(s) 116.
[0047] It would be appreciated that the units described are only exemplary units, and any other unit or sub-unit may be included as part of the system 100. These units may also be merged or divided into super-units or sub-units as may be configured.
Determining unit 112
[0048] In an embodiment, a determining unit 112 may receive the plurality of images and the set of attributes from the first set of sensors and the second set of sensors, respectively. The determining unit 112 may compare the plurality of images of the one or more objects with a prestored set of images to identify the one or more objects and to categorize each of the plurality of images into a set of categories, as shown in FIG. 3. For example, if an image is identified as a traffic sign, the corresponding image may be classified in the category of traffic signs, where the relevant information corresponding to the traffic sign is retrieved and sent to the predicting unit 114.
[0049] In another embodiment, the determining unit 112 may also receive the set of attributes from the second set of sensors. Based on the received set of attributes, the determining unit 112 may, for example, determine the distance and depth of an object. The determining unit 112 may also recognize the sound received from the second set of sensors, e.g., a microphone. In addition, the second set of sensors may include a GPS sensor to determine the location of the vehicle.
[0050] In an embodiment, based on the information retrieved from the received plurality of images and the set of attributes, the determining unit 112 may determine a future frame of one or more objects indicating the future location of the one or more objects for a future time window. In other words, based on the location, position, and velocity of an object, the determining unit 112 may determine the future location of the one or more objects. Such a determination of future location can serve as the basis for determining a trajectory for the vehicle to move from the current location to the destination location.
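To make the location/velocity reasoning concrete, a minimal sketch of a future-location estimate is given below, assuming a constant-velocity motion model over the future time window; the dataclass fields and units are illustrative, not specified by the disclosure.

```python
# Sketch: extrapolate each tracked object's position with a
# constant-velocity model to obtain the "future frame" estimate.
from dataclasses import dataclass


@dataclass
class TrackedObject:
    x: float   # metres, vehicle frame
    y: float
    vx: float  # metres/second
    vy: float


def predict_future_position(obj: TrackedObject, dt: float):
    """Constant-velocity extrapolation over a future time window dt."""
    return obj.x + obj.vx * dt, obj.y + obj.vy * dt


pedestrian = TrackedObject(x=12.0, y=-2.5, vx=0.0, vy=1.4)
print(predict_future_position(pedestrian, dt=2.0))  # (12.0, 0.3...)
```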
Predicting unit 114
[0051] In an embodiment, in response to the determination of the future frame of one or more objects, the predicting unit 114 can predict a trajectory for the vehicle to move from the current location to a destination location. The trajectory may be predicted based on the determination of a future frame of one or more objects. For example, the trajectory is predicted such that the one or more objects would not come in the path of the vehicle. Thus, the determination of the future frame of the one or more objects would reduce the risk of accidents.
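A trajectory "such that the one or more objects would not come in the path of the vehicle" could be selected, for example, by scoring candidate paths against the predicted future object positions. The sketch below is a hypothetical illustration; the waypoint format and the 1.5 m clearance are assumptions.

```python
# Sketch: pick the first candidate path whose closest approach to any
# predicted object position exceeds a safety clearance.
import numpy as np


def safe_trajectory(candidates, future_objects, clearance=1.5):
    """Return the first path keeping `clearance` metres from every
    predicted object position, or None if no candidate qualifies."""
    for path in candidates:                      # path: (N, 2) waypoints
        dists = np.linalg.norm(
            path[:, None, :] - future_objects[None, :, :], axis=2)
        if dists.min() > clearance:
            return path
    return None


straight = np.column_stack([np.linspace(0, 20, 21), np.zeros(21)])
swerve = np.column_stack([np.linspace(0, 20, 21), np.full(21, 2.5)])
objects = np.array([[10.0, 0.2]])                # predicted future position
chosen = safe_trajectory([straight, swerve], objects)
print(chosen is swerve)  # True: the offset path clears the object
```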
[0052] In another embodiment, the predicting unit 114 may further predict a set of control parameters for the future time window to move the vehicle to a destination location, based on the determination of the future frame of the one or more objects. The one or more parameters are predicted based on at least one of deep learning, the Hough transform, and other computer vision-based techniques.
[0053] In some embodiments, the prediction of the set of control parameters and the trajectory may be performed according to a prediction learning model. The prediction learning model indicates a weighted combination of the plurality of images and the set of attributes, where the plurality of images and the set of attributes may also be referred to as multi-modal parameters.
[0054] In an embodiment, the set of control parameters may include any or a combination of the steering angle and the amount of acceleration or deceleration associated with the vehicle. The steering angle may be predicted by determining the angles of lane lines, the curvature of the road, sign boards, and the turning angles of other vehicles and pedestrians.
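For the lane-line angle input specifically, the Hough transform mentioned in paragraph [0052] is a natural fit. Below is a hedged OpenCV sketch; the Canny and Hough parameters are common illustrative defaults rather than values from the disclosure.

```python
# Sketch: estimate lane-line segment angles with the probabilistic
# Hough transform, from which a steering correction could be derived.
import cv2
import numpy as np


def lane_line_angles(frame_bgr):
    """Return angles (degrees, image coordinates) of detected segments."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    if lines is None:
        return []
    return [float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            for x1, y1, x2, y2 in lines[:, 0]]


# Synthetic frame with one bright lane-like line, for demonstration.
frame = np.zeros((240, 320, 3), np.uint8)
cv2.line(frame, (60, 230), (160, 120), (255, 255, 255), 3)
print(lane_line_angles(frame))  # e.g. angles near -48 degrees
```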
[0055] In an embodiment, according to the prediction learning model, each parameter of the plurality of images and the set of attributes may be assigned a weight, which can be adjusted to train the prediction learning model. In particular, the predicting unit 114 may compare the predicted set of parameters with corresponding predefined values to determine an error signal. The predicting unit 114 may then adjust the weights associated with at least one of the plurality of images and the set of attributes according to the error signal, to minimize the error between the predicted set of parameters and the corresponding predefined values. This process of adjusting the weights may be known as backpropagation weight adjustment.
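In its simplest form, this error-driven weight adjustment can be sketched as a gradient step on a linear weighted combination; the linear form and the learning rate are assumptions for illustration, not the disclosed model architecture.

```python
# Sketch: one backpropagation-style update for a linear weighted
# combination of fused multi-modal features.
import numpy as np


def adjust_weights(weights, features, predefined, lr=0.01):
    """Compare prediction with the predefined value and step the weights."""
    predicted = float(weights @ features)
    error = predicted - predefined       # the error signal
    weights -= lr * error * features     # gradient of 0.5 * error**2
    return weights, error


w = np.array([0.5, 0.3, 0.2])
x = np.array([0.9, 0.1, 0.4])            # fused multi-modal features
w, err = adjust_weights(w, x, predefined=0.35)
print(round(err, 3), w)                  # error shrinks on repeated steps
```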
[0056] In an embodiment, the system 100 may include an alarming unit 111 configured to generate an alarm sound when at least one of the set of attributes exceeds a corresponding threshold value. For example, when an animal suddenly comes in front of the vehicle, the determining unit may send a signal to the predicting unit 114 to determine whether an alarm signal should be generated or not. The predicting unit 114 may then send the signal to the alarming unit 111, e.g., a speaker, to generate an alarm sound to alert the driver. The alarm sound can also be generated to alert other vehicles.
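The threshold check itself reduces to a per-attribute comparison, as in the short sketch below; the attribute names and threshold values are hypothetical.

```python
# Sketch: trigger the alarming unit when any sensed attribute exceeds
# its configured threshold. Names and limits are illustrative only.
THRESHOLDS = {"closing_speed_mps": 8.0, "sound_level_db": 90.0}


def should_alarm(attributes: dict) -> bool:
    """True when at least one attribute exceeds its threshold value."""
    return any(attributes.get(k, 0.0) > v for k, v in THRESHOLDS.items())


print(should_alarm({"closing_speed_mps": 9.5}))  # True -> sound the alarm
```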
[0057] FIG. 2 illustrates exemplary representations of network architecture of the automated driving control system, in accordance with an embodiment of the present disclosure.
[0058] As illustrated in FIG. 2, the first set of sensors 101 may include an image capturing device such as a camera 101 that may capture the images; based on the captured images, an obstacle detection unit 211 may detect the one or more objects such as signboards, traffic signals, and so on. The captured images may also be sent to a segmentation unit 212 to identify the driving lane for the vehicle to move in.
[0059] In an embodiment, the second set of sensors 103 may include a depth sensor 103-1, a microphone 103-2, an ultrasonic sensor 103-3, and a GPS sensor 103-4. The depth sensor 103-1 may detect the distance and depth of a moving object, whereas the microphone 103-2 may detect a sound signal propagated near the vehicle. The sound signal may be sent to an audio and speech recognition unit 213 to recognize the sound. Similarly, the signal from the ultrasonic sensor 103-3 may be sent to a measurement unit 214 to determine the distance between the vehicle and a closely located object. In addition, the GPS sensor 103-4 may sense the longitude and latitude of the vehicle. The data received from these units, or directly from the sensors, may be sent to the integration unit 215 to integrate all the features (also referred to as multi-modal features).
[0060] The obstacle detection unit 211, the segmentation unit 212, the audio and speech recognition unit 213, and the measurement unit 214 can be parts of the determining unit 112.
[0061] In an embodiment, the predicting unit 114, based on the integrated multi-modal features, may employ a prediction learning model 216 such as a neural network-based deep learning model. According to the model, the control parameters are predicted. The system may also include On-Board Devices (OBD) 217 to determine the current parameters such as steering angle, brake, acceleration, and so on. The current parameters may indicate human decisions, such as the movement of the steering wheel by the driver, the amount of brake applied by the driver, and so on. Based on the current parameters and the predicted parameters, an error signal 218 is generated, which is used for backpropagation weight adjustment 219. In other words, the error signal indicates the error between the predicted parameters and the real/current parameters corresponding to the human decision. Further, based on the predicted parameters, an alarm signal 220 may be generated. The backpropagation weight adjustment technique 219 may determine the adjustment to be made to the weights of at least one of the multi-modal parameters in the prediction learning model. In this manner, the model is trained to provide accurate predictions, which is also referred to as experiential learning through reinforcement 221.
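Read end to end, this loop (predicted controls versus OBD-reported human controls, error signal 218, weight adjustment 219) can be sketched as a supervised update per frame. PyTorch, the layer sizes, and the MSE loss below are implementation assumptions, not choices stated in the disclosure.

```python
# Sketch of one experiential-learning step: compare the model's
# predicted controls with the OBD-reported human controls, form the
# error signal, and backpropagate to adjust the weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(),
                      nn.Linear(64, 3))          # steering, brake, accel
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()


def training_step(fused_features, obd_controls):
    """Predicted vs. human controls -> error signal -> backpropagation."""
    predicted = model(fused_features)
    error = loss_fn(predicted, obd_controls)     # cf. error signal 218
    optimizer.zero_grad()
    error.backward()                             # cf. weight adjustment 219
    optimizer.step()
    return error.item()


# Dummy fused features and OBD readings, for demonstration only.
print(training_step(torch.randn(1, 256), torch.randn(1, 3)))
```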
[0062] FIG. 3 illustrates an exemplary pictorial representation of the sequence of operations of the automated driving control system, in accordance with an embodiment of the present disclosure. As illustrated in FIG. 3, object detection, semantic segmentation, and object classification are visualized. For example, based on the captured images, the objects are classified into a predefined set of categories such as person, traffic light, or sign board.
[0063] As illustrated in FIG. 3, a computer vision-based technique and optical flow are used to identify the motion of objects and edges in a scene, to estimate future frame flow for acceleration estimation on the fly in real time, coupled with a CNN-based architecture. In an embodiment, the microphone feed leverages neural network-based vernacular speech recognition to alert the driver with the required action. In another embodiment, the live map feed facilitates recognition of the area's traffic regulations for speed control.
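One common way to obtain the per-pixel motion referred to above is dense optical flow; the sketch below uses OpenCV's Farneback method with typical parameter values, which are illustrative defaults rather than settings from the disclosure.

```python
# Sketch: dense optical flow between two consecutive frames, giving a
# per-pixel (dx, dy) motion field that can feed future-frame estimation.
import cv2
import numpy as np

prev = np.random.randint(0, 255, (120, 160), np.uint8)
curr = np.roll(prev, 3, axis=1)  # simulate rightward scene motion

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5,
                                    poly_sigma=1.2, flags=0)
# Horizontal component averages roughly +3 px/frame for this shift.
print(flow.shape, flow[..., 0].mean())  # (120, 160, 2), ~3.0
```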
[0064] FIG. 4 illustrates a flow diagram 400 illustrating a method for controlling one or more operations of a vehicle, in accordance with an embodiment of the present disclosure.
[0065] In an embodiment, at block 402, a plurality of images and a set of attributes may be received from a first set of sensors and a second set of sensors, respectively. The plurality of images may be associated with one or more objects within a distance from the vehicle during a current time window. The set of attributes may pertain to at least a distance between the vehicle and the one or more objects during the current time window.
[0066] At block 404, a future frame of one or more objects indicating the future location of the one or more objects may be determined for a future time window. Further, at block 406, based on the determination of the future frame of the one or more objects, a set of parameters may be predicted for the future time window to move the vehicle to a destination location, where the prediction is performed according to a prediction learning model. The prediction learning model indicates a weighted combination of the plurality of images and the set of attributes.
[0067] FIG. 5 illustrates an exemplary computer system 500 to implement the proposed system in accordance with embodiments of the present disclosure.
[0068] As shown in FIG. 5, a computer system can include an external storage device 510, a bus 520, a main memory 530, a read-only memory 540, a mass storage device 550, a communication port 560, and a processor 570. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Examples of processor 570 include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), or AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system-on-a-chip processors or other future processors. Processor 570 may include various modules associated with embodiments of the present invention. Communication port 560 can be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. Communication port 560 may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system connects.
[0069] Memory 530 can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read-only memory 540 can be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or BIOS instructions for processor 570. Mass storage 550 may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g., those available from Seagate (e.g., the Seagate Barracuda 7102 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000); one or more optical discs; and Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.
[0070] Bus 520 communicatively couples processor(s) 570 with the other memory, storage, and communication blocks. Bus 520 can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives, and other subsystems as well as other buses, such as a front side bus (FSB), which connects processor 570 to the software system.
[0071] Optionally, operator and administrative interfaces, e.g., a display, keyboard, and a cursor control device, may also be coupled to bus 520 to support direct operator interaction with the computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port 560. External storage device 510 can be any kind of external hard drive, floppy drive, IOMEGA® Zip Drive, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), or Digital Video Disk - Read Only Memory (DVD-ROM). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[0072] Embodiments of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an implementation combining software and hardware that may all generally be referred to herein as a “circuit,” “module,” “component,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product comprising one or more computer-readable media having computer-readable program code embodied thereon.
[0073] Thus, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named manufacturer.
[0074] As used herein, and unless the context dictates otherwise, the term "coupled to" is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms "coupled to" and "coupled with" are used synonymously. Within the context of this document, the terms "coupled to" and "coupled with" are also used to mean “communicatively coupled with” over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary devices.
[0075] It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C, …, and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
[0076] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE INVENTION
[0077] The present disclosure provides an automated driving control system to move the vehicle while avoiding random obstacles in the path with higher stability as compared to conventional systems.

[0078] The present disclosure provides an automated driving control system that allows the vehicle to drive automatically in uneven road conditions and during traffic congestion.
[0079] The present disclosure provides an automated driving control system to facilitate effective route planning.


Claims:
1. An autonomous driving control system (100) for a vehicle, the system (100) comprising:
a first set of sensors (101) configured with the vehicle to capture a plurality of images associated with one or more objects present within a distance from the vehicle during a current time window;
a second set of sensors (103) configured with the vehicle to sense a set of attributes pertaining to at least a distance between the vehicle and the one or more objects during a current time window; and
a control circuitry (105) communicatively coupled with the first set of sensors (101) and the second set of sensors (103), the control circuitry (105) comprising a processor (102) coupled to memory (104), the memory (104) storing one or more executable instructions which, when executed by the one or more processors (102), cause the system (100) to:
receive the plurality of images and the set of attributes from the first set of sensors (101) and the second set of sensors (103), respectively;
determine a future frame of one or more objects indicating future location of the one or more objects for a future time window; and
based on the determination of the future frame of the one or more objects, predict a set of control parameters for the future time window to move the vehicle to a destination location, the prediction being performed according to a prediction learning model (216), wherein the prediction learning model (216) indicates a weighted combination of the plurality of images and the set of attributes.

2. The system (100) as claimed in claim 1, wherein the processor (102) is configured to predict a trajectory for the vehicle to move from a current location to a destination location.
3. The system (100) as claimed in claim 1, wherein the prediction learning model (216) is trained by:
comparing the predicted set of parameters with corresponding predefined values to determine an error signal; and
adjusting weights associated with at least one of the plurality of images and the set of attributes according to the error signal, so as to minimize the error between the predicted set of parameters and corresponding predefined values.
4. The system (100) as claimed in claim 1, wherein the set of control parameters comprises any or a combination of steering angle, amount of acceleration or deceleration associated with the vehicle, and wherein the steering angle is predicted by determining angles of lane lines, curvature of a road, sign boards, turning angle of other vehicles and pedestrians.
5. The system (100) as claimed in claim 1, wherein the first set of sensors (101) comprises an image capturing device to capture a plurality of images to detect the one or more objects present around the vehicle.
6. The system (100) as claimed in claim 1, wherein the set of attributes pertains to a sound signal produced within a predefined distance from the vehicle, and the location and position of the vehicle, and wherein the second set of sensors (103) comprises any or a combination of a microphone, an ultrasonic sensor, and a GPS sensor.
7. The system (100) as claimed in claim 1, wherein the system (100) comprises an alarming unit configured to generate an alarm sound when at least one of the set of attributes exceeds a corresponding threshold value.
8. The system (100) as claimed in claim 1, wherein the processor (102) is configured to compare the plurality of images of the one or more objects with a prestored set of images to identify the one or more objects and to categorize each of the plurality of images into a set of categories.
9. A method (400) for controlling one or more operations of a vehicle, the method (400) comprising:
receiving (402), by a processor (102), a plurality of images and a set of attributes from a first set of sensors (101) and a second set of sensors (103), respectively, wherein the plurality of images is associated with one or more objects present within a distance from the vehicle during a current time window, and wherein the set of attributes pertains to at least a distance between the vehicle and the one or more objects during the current time window;
determining (404), by the processor (102), a future frame of one or more objects indicating future location of the one or more objects for a future time window; and
based on the determination of the future frame of the one or more objects, predicting (406), by the processor (102), a set of parameters for the future time window to move the vehicle to a destination location, the prediction being performed according to a prediction learning model, wherein the prediction learning model indicates a weighted combination of the plurality of images and the set of attributes.

Documents

Application Documents

# Name Date
1 202241042284-STATEMENT OF UNDERTAKING (FORM 3) [23-07-2022(online)].pdf 2022-07-23
2 202241042284-POWER OF AUTHORITY [23-07-2022(online)].pdf 2022-07-23
3 202241042284-FORM 1 [23-07-2022(online)].pdf 2022-07-23
4 202241042284-DRAWINGS [23-07-2022(online)].pdf 2022-07-23
5 202241042284-DECLARATION OF INVENTORSHIP (FORM 5) [23-07-2022(online)].pdf 2022-07-23
6 202241042284-COMPLETE SPECIFICATION [23-07-2022(online)].pdf 2022-07-23
7 202241042284-Proof of Right [28-07-2022(online)].pdf 2022-07-28
8 202241042284-ENDORSEMENT BY INVENTORS [02-08-2022(online)].pdf 2022-08-02