ABSTRACT

User interaction systems in vehicles. Embodiments disclosed herein relate to vehicle systems, and more particularly to interfaces that enable user interactions with vehicle systems. Embodiments herein disclose methods and systems for enabling user interactions with a vehicle using at least one display, wherein the system configures the display for displaying information in a plurality of depth layers using a 3 dimensional depth field, and wherein the user can interact with the depth layers using a minimal number of touches. Embodiments herein further provide methods and systems for providing contextual information related to the driver and the vehicle in spatial/location, temporal, or behavioral terms using a 3 dimensional depth field. (FIG. 6)
STATEMENT OF CLAIMS
We claim:
1. A system (100) for providing information to a user of a vehicle using at least one display (102), the system (100) configured for
fetching information related to current location of the vehicle, time and vehicle conditions by a controller (101);
determining driver behavior by the controller (101);
determining a subset of functions from a list of available functions by the controller (101) based on the fetched information and the determined driver behavior;
displaying at least one item corresponding to the determined subset of functions using the display (102) by the controller (101) in a 3 dimensional manner, wherein the at least one item is displayed using a plurality of depth layers.
2. The system, as claimed in claim 1, wherein the controller (101) is configured to arrange positions and perspectives of the depth layers.
3. The system, as claimed in claim 1, wherein the depth layers comprise a plurality of layers, wherein a first layer is closest to the user and displays a primary menu item and subsequent layers serve as placeholders for additional information.
4. The system, as claimed in claim 3, wherein the controller (101) is configured to enable at least one of a user or an authorized person to configure the depth layers.
5. A method for providing information to a user of a vehicle using at least one display (102), the method comprising
fetching information related to current location of the vehicle, time and vehicle conditions by a controller (101);
determining driver behavior by the controller (101);
determining a subset of functions from a list of available functions by the controller (101) based on the fetched information and the determined driver behavior;
displaying at least one item corresponding to the determined subset of functions using a display (102) by the controller (101) in a 3 dimensional manner, wherein the at least one item is displayed using a plurality of depth layers.
6. The method, as claimed in claim 5, wherein the controller (101) arranges positions and perspectives of the depth layers.
7. The method, as claimed in claim 5, wherein the depth layers comprise a plurality of layers, wherein a first layer is closest to the user and displays a primary menu item and subsequent layers serve as placeholders for additional information.
8. The method, as claimed in claim 7, wherein at least one of a user or an authorized person can configure the depth layers.
This application is a Patent of Addition in furtherance of Indian patent application 201641033165.
TECHNICAL FIELD
[001] Embodiments disclosed herein relate to vehicle systems and more particularly to interfaces, which enable user interactions with vehicle systems.
BACKGROUND
[002] Currently, most vehicles are provided with systems comprising a display, such as an infotainment system, HVAC (Heating, Ventilation and Air Conditioning) systems, instrument clusters, and so on. With touch screen displays, driver interaction is mainly through touch or through control buttons (for example, steering wheel controls). Considering an example of an infotainment system with a display, a typical home screen of the infotainment system provides a highest level view of the functionalities, such as options for accessing media, phone, and navigation. However, if the driver wishes to operate the infotainment system or view information on the system, the driver has to focus on the screen, maneuver through different screens, and perform a series of touches to get to the relevant information for which he is looking. This demands complete driver attention towards the screen and forces him to take his eyes off the road. This can be distracting for the driver, thereby adversely affecting the safety of the vehicle, and can result in accidents or dangerous situations.
OBJECTS
[003] The principal object of this invention is to provide methods and systems for enabling user interactions with a vehicle using at least one display.
[004] Another object of the invention is to provide methods and systems for enabling user interactions with a vehicle using at least one display, wherein the system configures the display for displaying information in a plurality of depth layers using a 3 dimensional depth field.
[005] A further object of the invention is to provide methods and systems for enabling user interactions with a vehicle using at least one display, wherein the user can interact with the depth layers using a minimal number of touches.
[006] A further object of the invention is to provide methods and systems for providing contextual information related to the driver and the vehicle in terms of spatial/location or temporal (time of travel) or behavioral (drive pattern of the driver) using a 3 dimensional depth field.
BRIEF DESCRIPTION OF FIGURES
[007] This invention is illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[008] FIGs. 1a and 1b depict vehicle systems wherein a user of the vehicle can interact with the system, according to embodiments as disclosed herein;
[009] FIG. 2 depicts the controller, according to embodiments as disclosed herein;
[0010] FIGs. 3a and 3b depict example displays for an infotainment system, according to embodiments as disclosed herein;
[0011] FIG. 4 depicts another example of a display of an infotainment system, according to embodiments as disclosed herein;
[0012] FIG. 5 depicts an example display for an infotainment system, according to embodiments as disclosed herein; and
[0013] FIG. 6 is a flowchart depicting the process of providing information to the user of the vehicle based on at least one context, according to embodiments as disclosed herein.
DETAILED DESCRIPTION
[0014] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0015] The embodiments herein disclose methods and systems for enabling user interactions with a vehicle using at least one display. Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[0016] The vehicle as disclosed herein can be at least one of a car, van, truck, bus, motorcycle, scooter, or any other vehicle comprising at least one display based system. Examples of the system can be an infotainment system (wherein the infotainment system can depict at least one of a media menu, a phone menu, navigation, and so on), an HVAC (Heating, Ventilation and Air Conditioning) system, an instrument cluster, or any other system present in the vehicle which provides information to the user and/or enables the user to interact with the system.
[0017] User as disclosed herein can refer to any person present in the vehicle, such as the driver, at least one passenger, and so on.
[0018] FIGs. 1a and 1b depict vehicle systems wherein a user of the vehicle can interact with the system. The system 100 as depicted can comprise at least one controller 101 and at least one display 102. In an embodiment herein, the controller 101 can be separate from the display 102 and can be co-located with or located remotely from the display 102 (as depicted in FIG. 1a). The controller 101 can be connected to more than one system and can control more than one system present in the vehicle. In an embodiment herein, the controller 101 can be internal to the vehicle system (as depicted in FIG. 1b). In an embodiment herein, the system 100 can also comprise at least one user interface. Examples of the user interface can be touchscreens, physical buttons, steering wheel buttons, a joystick rotary controller, interfaces present in the rear of the vehicle so as to enable passengers seated in the rear to interact with the vehicle system 100, gesture based means (which enable the system 100 to sense gestures made by at least one user), and so on.
[0019] FIG. 2 depicts the controller. The controller 101, as depicted, comprises a rendering engine 201, an object controller 202, at least one communication interface 203, and a memory 204. The at least one communication interface 203 can enable the system 100 to interact with at least one system present in the vehicle. The at least one communication interface 203 can enable a user to interact with the system using at least one of the display 102, physical buttons, steering wheel buttons, a joystick rotary controller, interfaces present in the rear of the vehicle so as to enable passengers seated in the rear to interact with the vehicle system 100, gesture based means, and so on. The memory 204 can be at least one of a co-located memory or a remotely located memory.
[0020] The object controller 202 can display a plurality of menu items to the user on the display 102, wherein a primary menu item is displayed as the nearest object on the layer closest to the user and subsequent layers display further information related to the primary menu item or subsequent items. For example, the display 102 can display the current media item being played in the first layer, with the subsequent layers showing the other media items in the queue. This manner of displaying items as a plurality of layers on the display 102 in a 3D (3 dimensional) manner, wherein the first layer is the layer closest to the front and displays the primary menu item, and subsequent layers are present behind the first layer and serve as placeholders for additional information, is referred to herein as depth layers. The object controller 202 can enable at least one user or an authorized person to customize the depth layers, on a global basis or on a per-item basis, such as the number of depth layers to be displayed, the items to be displayed, and so on. FIGs. 3a and 3b depict example displays for an infotainment system. In FIG. 3a, the top panel displays generic information such as the date, time, and so on, and the display panel in the center displays the media being played currently, information related to a connected mobile device, information related to the vehicle, and navigation information. In FIG. 3b, the user has selected the media item and further information about the media is displayed in a prominent manner. Additional panels can display additional information such as information related to a connected mobile device, information related to the vehicle, and navigation information.
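As an illustration of the depth layer concept described above (the class and field names here are hypothetical assumptions, not taken from the disclosure), the layers could be modeled as an ordered stack, with layer 0 nearest the user holding the primary menu item and a per-user option to limit the number of visible layers:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DepthLayer:
    """One layer of the 3D depth field; index 0 is closest to the user."""
    index: int
    items: List[str] = field(default_factory=list)

@dataclass
class DepthStack:
    """Ordered stack of depth layers for one menu (e.g. media)."""
    layers: List[DepthLayer]

    def primary(self) -> DepthLayer:
        # The first layer displays the primary menu item.
        return self.layers[0]

    def configure(self, num_layers: int) -> None:
        # Customization: limit how many depth layers are displayed.
        self.layers = self.layers[:num_layers]

# Example: the current track on the first layer, queued tracks behind it.
media = DepthStack([
    DepthLayer(0, ["Now Playing: Track A"]),
    DepthLayer(1, ["Track B"]),
    DepthLayer(2, ["Track C"]),
])
```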
[0021] The object controller 202 can use the rendering engine 201 to arrange the depth layers in a 3 dimensional (3D) manner, so as to provide the user with a feeling that the display is 3 dimensional (3D), by arranging the position and perspective sizes of the depth layers. FIG. 4 depicts another example of a display of an infotainment system.
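A minimal sketch of how a rendering engine might compute the position and perspective size of each depth layer, assuming a simple pinhole-style scale factor (the formula and constants are illustrative assumptions, not part of the disclosure):

```python
def layer_transform(layer_index: int, layer_spacing: float = 0.5,
                    focal_length: float = 2.0) -> tuple:
    """Return (depth, scale) for a depth layer; deeper layers render smaller.

    Uses the perspective relation scale = f / (f + z), so layer 0 is drawn
    at full size and each layer behind it shrinks, giving a 3D feel.
    """
    z = layer_index * layer_spacing           # distance behind the first layer
    scale = focal_length / (focal_length + z)
    return z, scale

for i in range(4):
    z, s = layer_transform(i)
    print(f"layer {i}: depth={z:.1f}, scale={s:.2f}")
```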
[0022] The object controller 202 can create and represent at least one virtual object in the display 102, wherein a virtual object can aid in facilitating the transition from 2D mechanisms to 3D interactions. In an example herein, the virtual object can be a virtual floating sphere graphically drawn on the display 102, which can be used to represent the user's hand inside this 3D environment. In an embodiment herein, the user can interact with the object using the display 102, wherein the display 102 is a touch based display, using at least one of basic touch gesture(s) or multi-touch gestures (such as swiping, pinching, and so on). In an embodiment herein, the user can interact with the object using a rotary controller or a joystick. The user can perform actions such as clockwise/anticlockwise rotations, pushing down, pulling up, 360 degree push and tilt, and so on. In an embodiment herein, the user can interact with the object using controls present on the steering wheel. In an embodiment herein, the user can interact with the object using controls present in the instrument cluster. In an embodiment herein, the user can interact with the object using gestures performed in a pre-defined region within the vehicle. The interface 203 can comprise means such as camera based means and depth sensing means to identify the gesture made by the user. The interface 203 can locate the position of the user's hand and then translate that, from a fixed point of reference, into the distance that the user's hand has travelled into the 3D depth. This space can be used to represent and interact with the items on the display 102. Additional feedback mechanisms can be provided to give the driver proper feedback and minimize localization errors.
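A minimal sketch of the gesture translation described above, assuming the sensing means reports how far the hand has travelled from the fixed reference point (the function name, reach distance, and layer count are illustrative assumptions):

```python
def hand_to_layer(hand_distance_cm: float, max_reach_cm: float = 30.0,
                  num_layers: int = 4) -> int:
    """Map the distance the user's hand has travelled from a fixed
    reference point into the sensing region onto a depth layer index."""
    # Clamp to the sensing region to tolerate small localization errors.
    d = max(0.0, min(hand_distance_cm, max_reach_cm))
    return min(int(d / max_reach_cm * num_layers), num_layers - 1)

assert hand_to_layer(0.0) == 0    # hand at the reference point: front layer
assert hand_to_layer(30.0) == 3   # full reach: deepest layer
```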
[0023] The object controller 202 can be in communication with entities such as the vehicle systems (such as but not limited to the ECU (Engine Control Unit), the speed sensor, system clock, vehicle functioning monitoring and/or storage systems, vehicle conditions monitoring and/or storage systems, and so on), a location system (such as a positioning means (such as GPS (Global Positioning System)), a mapping application, a navigation means), and so on.
[0024] The object controller 202 can fetch information related to the current location of the vehicle, such as terrain, road conditions at that location, traffic, and so on, from the entities, using the communication interface 203. The object controller 202 can also access information such as the current time, as certain areas are more sensitive during particular times of day. For example, a school zone needs to be considered on working days, specifically at school opening and school closing times. The object controller 202 can also receive information related to the capabilities and conditions of the vehicle. These can include, but are not limited to, availability of ABS (Anti-lock Braking System), traction controls, airbags, ground clearance, suspension tolerance, or dynamic parameters such as vehicle load, tyre pressure, wear and tear of the vehicle parts (brake fluids, coolant, engine temperature), and so on. This information can be fetched from the memory 204 or any other storage location.
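Purely as an illustration (the structure and field names are hypothetical), the fetched context could be grouped into a single record that the later steps consume:

```python
from dataclasses import dataclass

@dataclass
class VehicleContext:
    """Context fetched by the object controller; illustrative fields only."""
    terrain: str              # e.g. "inclined"
    traffic: str              # e.g. "heavy in 50-100 m"
    time_of_day: str          # e.g. "08:15", near school opening time
    vehicle_load_kg: float    # dynamic parameter
    tyre_pressure_psi: float  # dynamic parameter
    has_abs: bool             # vehicle capability

ctx = VehicleContext("inclined", "heavy in 50-100 m", "08:15",
                     520.0, 32.0, True)
```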
[0025] The object controller 202 can determine the driver behavior. The object controller 202 can characterize the driver behavior by factors such as, but not limited to, acceleration, hard braking, appropriate gear selection for driving, and so on. The object controller 202 can use any suitable method for determining the driver behavior. In an embodiment herein, the object controller 202 can provide information to an external entity, wherein the external entity can determine the driver behavior and can communicate the determined driver behavior to the object controller 202. The object controller 202 can fetch a list of available functions in the vehicle from a suitable location such as the memory 204. The object controller 202 can use a look-up table (which can be present in a suitable location such as the memory 204) to determine a context-sensitive subset of functions to be either enabled or disabled, based on the available functions and the determined/fetched information. The look-up table can comprise at least one pre-defined condition and prediction. An example look-up table is depicted in Table 1.
| Vehicle function | Driver behavior | Navigation, POI, traffic | Output configuration |
|---|---|---|---|
| Vehicle load > 500 kg & cubic capacity 900 cc | Change from 3rd to 1st gear in 2-5 seconds | Increase in altitude/inclined terrain; heavy traffic in 50-100 meters | Navigation grid: priority 1; gear down to 2; shift the current priorities stacked behind in 3D |

Table 1
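A minimal sketch of how the row of Table 1 could be evaluated as a look-up rule against the fetched context and determined driver behavior (the rule representation and field names are illustrative assumptions, not the disclosed implementation):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """One row of a Table 1-style look-up table (illustrative)."""
    condition: Callable[[Dict], bool]
    output_config: str

rules: List[Rule] = [
    Rule(
        # Heavy load, small engine, slow downshift, inclined terrain ahead.
        condition=lambda c: (c["vehicle_load_kg"] > 500
                             and c["cubic_capacity_cc"] <= 900
                             and 2 <= c["shift_3_to_1_s"] <= 5
                             and c["terrain"] == "inclined"),
        output_config="Navigation grid priority 1; suggest gear down to 2; "
                      "shift current priorities back in the 3D depth layers",
    ),
]

def determine_output(context: Dict) -> List[str]:
    """Return the output configurations of all matching rules."""
    return [r.output_config for r in rules if r.condition(context)]

print(determine_output({"vehicle_load_kg": 520, "cubic_capacity_cc": 900,
                        "shift_3_to_1_s": 3, "terrain": "inclined"}))
```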
[0026] The object controller 202 can generate the display configuration. Examples of the context-sensitive functions can comprise music being played, notifications from mobile devices connected to the vehicle (such as SMS (Short Messaging Service), unread mails/messages, call logs, and so on), vehicle information (fuel level, distance to empty, current speed, gear up/down suggestions, rear/front cameras, air conditioning, and so on), navigation and POI (map, navigation system, POI (Point of Interest), terrain information, live traffic conditions, and so on), and so on. Further examples related to the vehicle functions and capabilities can comprise at least one of determination of tyre pressure, determination of vehicle load, and engine power (cubic capacity, transmission power, shift time and pick up time with gears, and so on). Further examples related to the driver behavior detection system can comprise determination of the response time from the driver on given gears, determination of the braking time, determination of the capabilities of the driver in driving uphill/on inclinations with/without load, and so on. The object controller 202 can provide the determined context-sensitive functions and the generated display configuration to the rendering engine 201. The rendering engine 201 can render the information and display the rendered information using the display 102. An example display is depicted in FIG. 5, wherein the display 102 provides an indication to the driver to shift to a lower gear.
[0027] FIG. 6 is a flowchart depicting the process of providing information to the user of the vehicle based on at least one context. The object controller 202 fetches (601) information from at least one entity using the communication interface 203. The fetched information can relate to the current location of the vehicle, such as terrain, road conditions at that location, traffic, and so on. The fetched information can comprise the current time. The fetched information can comprise the capabilities and conditions of the vehicle. The object controller 202 determines (602) the driver behavior. The object controller 202 can characterize the driver behavior by factors such as, but not limited to, acceleration, hard braking, appropriate gear selection for driving, and so on. The object controller 202 fetches (603) the list of available functions in the vehicle. The object controller 202 determines (604) a context-sensitive subset of functions to be either enabled or disabled, based on the available functions and the determined/fetched information, using a look-up table. The object controller 202 generates (605) the display configuration. The object controller 202 provides (606) the determined context-sensitive functions and the generated display configuration to the rendering engine 201. The rendering engine 201 renders (607) the information and displays (608) the rendered information using the display 102. The various actions in method 600 may be performed in the order presented, in a different order, or simultaneously. Further, in some embodiments, some actions listed in FIG. 6 may be omitted.
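Steps 601-608 above can be read as a single processing cycle. A minimal, self-contained sketch of that cycle, using a stand-in controller whose method names and stub logic are illustrative assumptions rather than the disclosed implementation:

```python
class MockController:
    """Minimal stand-in for the controller 101 (illustrative only)."""

    def fetch_information(self):                      # step 601
        return {"terrain": "inclined", "time": "08:15", "load_kg": 520}

    def determine_driver_behavior(self):              # step 602
        return {"shift_3_to_1_s": 3}

    def fetch_available_functions(self):              # step 603
        return ["media", "navigation", "gear_suggestion"]

    def determine_subset(self, functions, context, behavior):  # step 604
        # e.g. enable the gear suggestion only on inclined terrain.
        return [f for f in functions
                if f != "gear_suggestion" or context["terrain"] == "inclined"]

    def generate_display_config(self, subset):        # step 605
        # One depth layer per enabled function, in priority order.
        return {f: layer for layer, f in enumerate(subset)}

def run_cycle(c):
    """One pass of the FIG. 6 flow (601-608)."""
    context = c.fetch_information()
    behavior = c.determine_driver_behavior()
    functions = c.fetch_available_functions()
    subset = c.determine_subset(functions, context, behavior)
    config = c.generate_display_config(subset)        # steps 605-606
    print("render:", config)                          # stand-in for 607-608

run_cycle(MockController())
```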
[0028] The embodiments above are explained using infotainment systems merely as an example; however, it will be obvious to a person of ordinary skill in the art that the embodiments disclosed herein can be extended to any display based system present in a vehicle.
[0029] Embodiments herein disclose a more ergonomic way of information representation, which gives the user the basic and relevant information by just glancing at the screen, and which enables mechanisms to interact with the depth information layers using a minimal number of touches. Embodiments disclosed herein alleviate the demand for driver attention by creating and using a 3D depth field made available in the display 102 to show additional information in these depth layers. Embodiments disclosed herein enable users to interact with these depth layers, reducing the number of interactions required to access the information as compared to currently deployed display based solutions.
[0030] The embodiments disclosed herein describe methods and systems for enabling user interactions with a vehicle using at least one display. Therefore, it is understood that the scope of the protection is extended to such a program, and in addition to a computer readable means having a message therein; such computer readable storage means contain program code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in a preferred embodiment through or together with a software program written in, e.g., Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or software modules being executed on at least one hardware device. The hardware device can be any kind of portable device that can be programmed. The device may also include means which could be, e.g., hardware means like an ASIC, or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. The method embodiments described herein could be implemented partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[0031] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.