Abstract: The present subject matter relates to a method and system for automatically adjusting vehicle mirrors and a head-up display (HUD) based on the eye position of a user (101). The method includes detecting (601) a vehicle ignition ON condition, continuously detecting (603), at a predetermined time interval, an eye position of a user (101) sitting on a driver seat of a vehicle, and determining (604) 'Ye' and 'Ze' parameters of the eye position based on the detected eye positions. The method further includes comparing (605) the determined 'Ye' and 'Ze' parameters with the stored 'Ye' and 'Ze' parameter values and updating the parameter storing table (400). Further, the method includes determining (606) position estimation values of the ORVM, IRVM, and HUD based on the 'Ye' and 'Ze' parameters corresponding to the highest weight value and adjusting (607) the ORVM (102), the IRVM (106), and the HUD (104) based on the determined position values.
The present subject matter described herein relates to an automated driver assist system for vehicles. In particular, the present subject matter relates to a method and a system for automatically adjusting mirrors and head-up display of a vehicle based on a driver's eye position.
BACKGROUND
[0002] Generally, vehicles have outside rear-view mirrors (ORVMs) that are present outside on the left side and the right side of the vehicle near the front doors, an inside rear-view mirror (IRVM) that is provided near the top of the front windshield of the vehicle, and a head-up display (HUD), which is a transparent display that presents data without requiring users to look away from their usual viewpoints. The ORVMs and the IRVM are provided to see the side view and rear view of the vehicle, respectively, without much eye strain.
[0003] Vehicle mirrors such as the ORVMs and the IRVM can be adjusted manually or by using electronic controls situated on the door trim or elsewhere in the interior of the car. In either case, the driver has to adjust the positions of the ORVMs, the IRVM, and the HUD by hand according to an optimum line of sight.
[0004] Since multiple drivers may drive a single car, the mirror adjustment needs to be done frequently depending upon each driver's line of sight. Therefore, the angle at which each of the mirrors is set by the previous driver can be non-ideal for another driver, depending upon the position of the other driver's line of sight.
[0005] Conventional mirror placement is still constrained in numerous respects. The settings that can be saved in traditional mirror placement systems, for example, are static. The mirrors may no longer be at the proper angle to observe the area behind the car if the driver changes posture in the seat, for example by slouching or shifting to one side or the other.
[0006] In other words, as the driver moves his or her viewing posture, the view in the mirror also changes accordingly. As a result, the driver's ability to perceive the sideview and the rearview of the car that the mirrors were set up to view is impaired. To keep an optimal view of the scene reflected in the mirror with traditional systems, the user must either maintain the same head position throughout the drive or change the mirrors repeatedly.
[0007] Moreover, maintaining the same head position for a long duration is inconvenient. It prevents a driver's natural need to shift posture regularly. Even if the ORVMs, IRVM, and HUD are electrically actuated, the user has to manually set the ORVMs and IRVM using separate control switches, which in turn is time-consuming and error-prone. The HUD also requires additional time and effort to be adjusted according to the driver's line of sight. This, as a whole, is a time-consuming task, as a separate line of action is required for each mirror adjustment and the HUD adjustment.
[0008] Furthermore, most of the time, the driver forgets to adjust the mirrors before starting the car, which raises safety concerns and can lead to accidents and other such problems on the road.
[0009] Therefore, there is a need to provide a system for vehicle mirrors and head-up display that is configured to automatically adjust the position of each mirror and head-up display depending upon the position of the driver's eye and at the same time ignore the temporary movements of the driver's line of sight.
OBJECTS OF THE DISCLOSURE
[0010] It forms an object of the present disclosure to overcome the aforementioned and other drawbacks/limitations in the existing solutions available in the form of related prior arts.
[0011] It is a primary object of the present disclosure to provide a system and a method for automatically adjusting the vehicle mirrors and head-up display.
[0012] It is another object of the present disclosure to automatically adjust the vehicle mirrors and head-up display based on the user's eye position.
[0013] It is another object of the present disclosure to automatically adjust the vehicle mirrors and head-up display based on the most prominent position of the user.
[0014] It is another object of the present disclosure to automatically neglect the user's positions that occur only a few times, thus avoiding any undesired mirror and head-up display positions.
[0015] It is another object of the present disclosure to provide a method that recognizes the most prominent position of different users and adjusts the position of the mirrors and head-up display accordingly.
[0016] These and other objects and advantages of the present subject matter will be apparent to a person skilled in the art after consideration of the following detailed description taken into consideration with accompanying drawings in which preferred embodiments of the present subject matter are illustrated.
SUMMARY
[0017] A solution to one or more drawbacks of existing technology and additional advantages are provided through the present disclosure. Additional features and advantages are realized through the technicalities of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered to be a part of the claimed disclosure.
[0018] The present disclosure offers a solution in the form of a method and a system for automatically adjusting vehicle mirrors and a head-up display (HUD) based on the eye position of a user. The method includes detecting a vehicle ignition ON condition and continuously detecting, at a predetermined time interval, an eye position of a user sitting on a driver seat of a vehicle. After that, 'Ye' and 'Ze' parameters of the eye position are determined based on the detected eye positions. The method further includes comparing the determined 'Ye' and 'Ze' parameters with the stored 'Ye' and 'Ze' parameter values and updating the parameter storing table. Further, the method includes determining position estimation values of the ORVMs, IRVM, and HUD based on the 'Ye' and 'Ze' parameters corresponding to the highest weight value and adjusting the ORVMs, the IRVM, and the HUD based on the determined position values.
[0019] In an aspect of the invention, the stored 'Ye' and 'Ze' parameters are reset after the predetermined time interval.
[0020] In an aspect of the invention, the 'Ye' parameter is associated with vertical positioning of the eye position with respect to a depth camera, and the 'Ze' parameter is associated with a depth of the eye position with respect to the depth camera.
[0021] In an aspect of the invention, the determined 'Ye' and 'Ze' parameters are stored in a parameter storing table, and wherein the method comprises sorting the entries in the parameter storing table based on values of weights.
[0022] In an aspect of the invention, storing the determined 'Ye' and 'Ze' parameters comprises determining whether the 'Ye' and 'Ze' parameter values are already stored in the parameter storing table. In response to a determination that the 'Ye' and 'Ze' parameter values are stored in the parameter storing table, the weight value corresponding to the 'Ye' and 'Ze' parameters is increased by a unit; if the 'Ye' and 'Ze' parameters are not stored in the parameter storing table, the 'Ye' and 'Ze' parameter values are stored in the parameter storing table and assigned a unit weight.
[0023] In an aspect of the invention, the position estimation parameters 'pi' and 'qi' of the ORVMs, IRVM, and HUD are determined. The 'Ye' and 'Ze' values in the parameter storing table having the highest weight value are used for predicting the 'pi' and 'qi' values using parameter prediction logic.
[0024] In another aspect of the invention, the system for automatically adjusting vehicle mirrors and the head-up display includes a depth camera to continuously detect, at a predetermined time interval, an eye position of a user sitting on a driver seat of a vehicle; a control module coupled to the depth camera to process the 'Ye' and 'Ze' parameter values of the eye position obtained from the depth camera; and an ORVM controller, an IRVM controller, and a HUD controller coupled to the control module, wherein the ORVM controller and the IRVM controller control and adjust the mirrors and the HUD controller controls and adjusts the head-up display to the most favorable condition preferred by the user.
[0025] In an aspect of the invention, the system comprises a comparator configured to compare the 'Ye' and 'Ze' parameters with stored 'Ye' and 'Ze' parameter values. Further, the comparator updates a parameter storing table.
[0026] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0027] It is to be noted, however, that the appended drawings illustrate only typical embodiments of the present subject matter and are therefore not to be considered limiting of its scope, for the present disclosure may admit to other equally effective embodiments. The detailed description is described with reference to the accompanying figures. In the figures, a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems or methods or structures in accordance with embodiments of the present subject matter are now described, by way of example, and with reference to the accompanying figures, in which:
[0028] Fig. 1 illustrates a system for automatically adjusting mirrors and head-up display of a vehicle according to the present disclosure;
[0029] Fig. 2 illustrates a tracking position of the user's head according to the present disclosure.
[0030] Fig. 3 illustrates a block diagram of the system according to the present disclosure.
[0031] Fig. 4 illustrates a parameter storing table according to the present disclosure.
[0032] Fig. 5 illustrates a flow chart of a method for adjusting the mirrors and HUD according to the present disclosure.
[0033] Fig. 6 illustrates the method for adjusting the mirrors and HUD according to the present disclosure.
[0034] The figures depict embodiments of the present subject matter for illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF INVENTION
[0035] The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0036] It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
[0037] It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two steps shown in succession may be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0038] Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0039] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0040] Hereinafter, a description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the present disclosure.
[0041] The present invention relates to a system and a method of adjusting mirrors of a vehicle, such as an outside rear-view mirror (ORVM) and an inside rear-view mirror (IRVM), and a head-up display (HUD) according to the position of a user's eye. The system automatically configures and changes the mirror and HUD positions in case there is a change in the seating position or a different user occupies the driver seat.
[0042] Further, the mirrors and HUD are adjusted automatically according to the most prominent position of the user, thereby neglecting the positions of the user that occur only a few times and thus avoiding any undesired mirror and HUD positions, such as when the user bends down a little to access the storage box on the dash panel for a few seconds. The method identifies the most prominent position of the user and determines the most favorable position of the mirrors and HUD from a parameter storing table.
[0043] Fig. 1 shows a system (100) for automatically adjusting mirrors and a head-up display of a vehicle according to the present disclosure. The system (100) includes a depth camera (108), a control unit (109) connected to the depth camera (108), two ORVMs (102a, 102b) having ORVM controllers (103a, 103b) placed at the left side and the right side of the car, an IRVM (106) having an IRVM controller (107) placed at the top-centre of the windshield on the roof, and a HUD (104) linked to a HUD controller (105) provided on the vehicle dashboard. The ORVMs (102a, 102b), the IRVM (106), and the HUD (104) are connected to the control unit (109) through the controllers (103a, 103b, 105, 107).
[0044] The depth camera (108) is provided near IRVM (106) to track eye sight of the driver based on depth image data of the driver (101). The depth camera (108) is configured to continuously track the eye sight of the driver of the vehicle. The depth camera (108) is capable of calculating the depth of each pixel along with its RGB value. The depth camera (108) measures the 'ye' and 'ze' parameters. The 'ye' represents the height and 'ze' represents the depth of the driver (101) when the driver (101) moves forward or backward. The depth camera (108) is coupled to the control unit (109) and sends 'ye' and 'ze' parameters to it.
[0045] The control unit (109) is further coupled to the ORVM controllers (103a, 103b), the IRVM controller (107), and the HUD controller (105). The control unit (109) determines the values of the 'ye' and 'ze' parameters received from the depth camera (108), further processes the parameter values, and sends commands to the ORVM controllers (103a, 103b), the IRVM controller (107), and the HUD controller (105). The ORVM controllers (103a, 103b), the IRVM controller (107), and the HUD controller (105) are further connected to a corresponding motor to adjust the mirrors or HUD based on the position estimation values 'pi' and 'qi' corresponding to the 'Ye' and 'Ze' parameter values with the highest weight.
[0046] The ORVMs (102a, 102b) are attached on the left side and the right side of the vehicle. Each ORVM (102a, 102b) has an ORVM controller (103a, 103b) that is further connected to a respective motor.
[0047] An IRVM (106) is provided on the top-center of the windshield on the roof and it has the IRVM controller (107) and a motor.
[0048] A HUD (104) is provided at the dashboard of the vehicle and has the HUD controller (105) and a motor.
[0049] The motors receive signals from the ORVM controllers (103a, 103b), the IRVM controller (107), and the HUD controller (105) to adjust and set the mirrors (102a, 102b, 106) and the HUD (104).
[0050] Referring to Fig. 2, it shows a coordinate system for monitoring the eye gaze using the depth camera (108). A face of a user (101) is shown along with a point 'R' (201) that is provided for tracking in 'ye' and 'ze' where 'ye' and 'ze' are eye coordinates of the user (101). The 'ye' represents the height and 'ze' represents the depth of the user (101) when the user (101) moves forward or backward. Therefore, the 'ye' and 'ze' are tracked using the point 'R' (201). With the help of 'ye' and 'ze' parameters attained from the depth camera (108), eye gaze of the user (101) is determined.
[0051] Referring to Fig. 3, it shows a circuit diagram of the system (100). The circuit (306) includes a pair of AND gates (302) and a system CPU (305). Each of the AND gates (302) takes one continuous analog input, 'ye' or 'ze', from the depth camera controller (301) and the other input from a clock with time period 't'. The outputs from the AND gates give the 'Ye' and 'Ze' values after every predetermined time interval.
[0052] The outputs from the AND gates are provided to the system CPU (305). The system CPU (305) includes a comparator (303) and a parameter prediction logic (304). The outputs from the AND gates, i.e., the 'Ye' and 'Ze' parameters, are provided to the comparator (303) as inputs. The comparator (303) compares the received 'Ye' and 'Ze' values with the parameter values stored in the parameter storing table. Based on the comparator output, if the received 'Ye' and 'Ze' values are present in the parameter storing table, the corresponding weight is increased by 1; otherwise, 'Ye' and 'Ze' are added to the parameter storing table and assigned a unit weight, in each predefined time interval.
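The clocked gating of the continuous 'ye' and 'ze' signals into periodic 'Ye' and 'Ze' samples can be pictured, assuming a software sampling loop rather than hardware gates, roughly as in the following sketch; read_depth_camera() is a hypothetical placeholder for the output of the depth camera controller (301) and the interval value is illustrative only.

```python
import time

SAMPLE_INTERVAL_T = 1.0  # predetermined time interval 't' in seconds (illustrative value)

def read_depth_camera():
    """Hypothetical placeholder returning the continuous ('ye', 'ze') eye coordinates."""
    raise NotImplementedError

def sample_eye_position():
    """Latch the continuous ye/ze signal once per clock period, yielding periodic Ye/Ze values."""
    while True:
        ye, ze = read_depth_camera()   # continuous inputs from the depth camera controller
        yield ye, ze                   # one gated, periodic ('Ye', 'Ze') sample
        time.sleep(SAMPLE_INTERVAL_T)  # wait for the next clock edge
```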
[0053] In parallel, the 'Ye' and 'Ze' values in the parameter storing table having the highest weight value are used for predicting the 'pi' and 'qi' values using the parameter prediction logic. The predicted 'pi' and 'qi' values are used to adjust the ORVMs, the IRVM, and the HUD positions. The 'pi' and 'qi' values refer to the coordinates of the ORVMs, the IRVM, and the HUD corresponding to their positions.
[0054] The predicted parameters are 'p1' and 'q1' for the left ORVM controller (103b), 'p2' and 'q2' for the IRVM controller (107), 'p3' and 'q3' for the right ORVM controller (103a), and 'p4' and 'q4' for the HUD controller (105) to adjust and set the mirrors and the HUD. Here, the parameter 'p' represents up-down movement and 'q' represents left-right movement.
[0055] Therefore, the ORVM, IRVM, and HUD controllers (103a, 103b, 107, 105) adjust the positions of the mirrors and HUD based on the 'p' and 'q' values received from the system CPU (305).
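One way to realise the parameter prediction logic (304) is sketched below, assuming for illustration a simple linear calibration per actuator; the disclosure itself leaves the logic to machine learning or manual calibration, so the actuator names and all coefficient values here are invented placeholders, not disclosed values.

```python
# Assumed linear calibration per actuator: p = a*Ye + b*Ze + c (up-down movement),
#                                          q = d*Ye + e*Ze + f (left-right movement).
# Every coefficient below is a made-up placeholder, not a calibrated value.
CALIBRATION = {
    "left_orvm":  {"p": (0.8, 0.1, 5.0), "q": (0.0, 0.6, 2.0)},   # p1, q1 -> controller 103b
    "irvm":       {"p": (0.7, 0.2, 3.0), "q": (0.1, 0.5, 1.0)},   # p2, q2 -> controller 107
    "right_orvm": {"p": (0.8, 0.1, 4.0), "q": (0.0, 0.7, 2.5)},   # p3, q3 -> controller 103a
    "hud":        {"p": (0.5, 0.3, 1.0), "q": (0.0, 0.0, 0.0)},   # p4, q4 -> controller 105
}

def predict_positions(ye: float, ze: float) -> dict:
    """Map the most prominent (Ye, Ze) pair to (p, q) position commands for each controller."""
    commands = {}
    for actuator, coeffs in CALIBRATION.items():
        ap, bp, cp = coeffs["p"]
        aq, bq, cq = coeffs["q"]
        commands[actuator] = (ap * ye + bp * ze + cp,   # 'p': up-down position value
                              aq * ye + bq * ze + cq)   # 'q': left-right position value
    return commands
```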
[0056] Fig. 4 illustrates a parameter storing table according to the present disclosure. The system CPU stores the parameter storing table (400). Initially, when the vehicle ignition switch is turned ON, the parameter storing table stores the 'Yp' and 'Zp' parameter values corresponding to the previous most prominent eye position, and a predefined weight value (for example, "N") is assigned to it. After the ignition switch is turned ON, the depth camera continuously detects 'Ye' and 'Ze' values, and the values of 'Ye' and 'Ze' as determined by the depth camera are stored in the parameter storing table. The values of 'Ye' and 'Ze' are compared to the values already stored in the parameter storing table (400); if the newly detected values are not present in the parameter storing table (400), a new entry is created in the parameter storing table (400) and a unit weight is assigned against the newly created entry. If the newly detected values are present in the parameter storing table (400), the weight stored against the detected values is increased by 1.
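Purely for illustration, the insert-or-increment behaviour described above can be sketched as follows; this is a minimal Python sketch assuming the table is a dictionary keyed by the (Ye, Ze) pair, and the function names are hypothetical, not part of the disclosure.

```python
# Minimal sketch of the parameter storing table update described above (illustrative only).
# The table maps an observed (Ye, Ze) pair to a weight counting how often it has been seen.

def update_table(table: dict, ye: int, ze: int) -> None:
    """Increase the weight if (ye, ze) is already stored, else add it with a unit weight."""
    key = (ye, ze)
    if key in table:
        table[key] += 1   # eye position seen again: raise its weight by one unit
    else:
        table[key] = 1    # new eye position: store it and assign a unit weight

def most_prominent(table: dict) -> tuple:
    """Return the (Ye, Ze) entry with the highest weight, i.e., the most prominent position."""
    return max(table, key=table.get)
```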
[0057] Further, the entries stored in the parameter storing table (400) are continuously sorted such that the 'Ye' and 'Ze' entries having the highest weight values remain at the top of the table. The 'Ye' and 'Ze' entry having the highest weight value represents the most prominent position of the user.
[0058] Further, the weights stored against the 'Ye' and 'Ze' values are reset at every predefined time interval. Such resetting results in a very short response time for mirror and HUD adjustment in case of a new user whose prominent position is different from that of the previous user.
[0059] In an alternative embodiment, a face identification parameter is also stored in the parameter storing table. The face identification parameter is a parameter that identifies the user, and it is stored corresponding to the 'Ye' and 'Ze' values. Upon detection of a face identification parameter that is different from the earlier stored face parameter, i.e., the user on the driving seat has changed, the entries in the parameter storing table are reset with immediate effect.
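For this alternative embodiment, a brief, hypothetical sketch of the immediate reset on a driver change is given below; the face identification values and the clearing behaviour are assumptions made for illustration only.

```python
def check_driver_change(table: dict, stored_face_id, current_face_id):
    """Reset the parameter storing table immediately if a different face is detected."""
    if current_face_id != stored_face_id:
        table.clear()             # new driver detected: discard all stored (Ye, Ze) weights
        return current_face_id    # remember the new face identification parameter
    return stored_face_id         # same driver: keep the table as it is
```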
[0060] Referring to Fig. 5, a flow chart of a method for adjusting the mirrors and the HUD according to the present disclosure is provided. The mirrors and the HUD are adjusted based on the determination of the prominent position of the driver. The term IG-ON means that the ignition is turned ON, and the term IG-OFF means that the ignition is turned OFF.
[0061] In the step (501), it is determined if the ignition of the vehicle is turned 'ON', i.e., IG-ON, or not.
[0062] In step (502), if the ignition is turned ON, the 'Yp' and 'Zp' values that were stored from the last process when the ignition was turned off, or IG-OFF, are given to 'Ye' and 'Ze' respectively and assigned a weight 'N'.
[0063] In step (503), the 'Ye' and 'Ze' parameter values along with the assigned weight 'N' are stored in a parameter storing table as shown in Fig. 4. The 'Ye' is the height of the tracking point (201) and 'Ze' is the depth of the tracking point (201). The parameter storing table stores the 'Ye' and 'Ze' values along with their weights.
[0064] In step (504), after the predefined time interval, the system takes new 'Ye' and 'Ze' values from the depth camera (108).
[0065] In step (505), the system checks whether the new values of 'Ye' and 'Ze' are already stored in the parameter storing table (400). If not, a unit weight is assigned to 'Ye' and 'Ze' and the method moves to step (503). If yes, the method moves to step (506).
[0066] In step (506), the weight corresponding to the new 'Ye' and 'Ze' values in the parameter storing table is increased by 1, and the method moves to step (504) to again obtain new values of 'Ye' and 'Ze' after the predetermined time interval.
[0067] This continues for a period of time 't1, t2, t3 ... tn'. Based on the weight values, the table keeps getting sorted in descending order after every predetermined time interval.
[0068] For example, let us assume that during the last drive, the most prominent position was '3' and '5'. Both the '3' and '5' values get stored as 'Yp' and 'Zp' at ignition OFF or IG-OFF. At the next ignition ON or IG-ON, '3' and '5' are stored as 'Ye' and 'Ze' in the parameter storing table and a weight N (say, N = 50) is assigned to them. The depth camera keeps capturing 'Ye' and 'Ze' values after every 't' interval. Suppose the new values of 'Ye' and 'Ze' received are '4' and '8'.
[0069] After checking, the system assigns a weight of 1 to the values '4' and '8' and stores them in the parameter storing table, as they were not earlier present in the table. Similarly, in the next case, if the values come out to be '3' and '7', they are again given a weight of '1' and stored in the parameter storing table. If in a further case the values again come out to be '4' and '8', their weight is increased by 1 and becomes 2. Therefore, the weight of the user's (101) most preferred position will increase rapidly and become the highest, due to which it will come to the top of the parameter storing table.
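Replaying the worked example above as a short sketch (the sample values and N = 50 are those given in the text; collections.Counter merely stands in for the parameter storing table):

```python
from collections import Counter

table = Counter({(3, 5): 50})              # previous prominent position seeded at IG-ON with weight N = 50
for sample in [(4, 8), (3, 7), (4, 8)]:    # samples captured after successive 't' intervals
    table[sample] += 1                     # insert-or-increment, as described in the flow chart
# table is now Counter({(3, 5): 50, (4, 8): 2, (3, 7): 1});
# (3, 5) remains the most prominent position until another entry's weight overtakes it.
```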
[0070] The part of the process shown in dotted lines keeps running until the ignition is turned off. In parallel, another process keeps running.
[0071] In step (509), the system then picks up the 'Ye' and 'Ze' values from the parameter storing table that have the highest weight value, which is generally the first entry in the table since the table is sorted in descending order.
[0072] In step (510), the values of 'pi' and 'qi' are obtained from the parameter prediction logic corresponding to the values of 'Ye' and 'Ze' having the highest weight. The parameter prediction logic (304) is created either by machine learning or manually by calibration. The 'pi' and 'qi' values refer to the corresponding positions of the mirrors and the HUD. In step (511), the obtained 'p1' and 'q1', 'p2' and 'q2', 'p3' and 'q3', and 'p4' and 'q4' values are sent to the respective ORVM, IRVM, and HUD controllers (103a, 103b, 107, 105) to adjust the mirror and HUD positions accordingly until the IG-OFF condition.
[0073] In step (512), if the ignition is turned off, the most prominent values of 'Ye' and 'Ze' are stored as 'Yp' and 'Zp' as shown in step (513), and the remaining values are cleared from the table (400).
[0074] In case the ignition is not turned off and the user (101) is still driving, if the weight of the first entry is >= k*N as given in step (507) (where the weight is the number of times the 'Ye', 'Ze' values repeat themselves and k is a constant), i.e., the weight crosses an estimated or predefined value, all the weights are reset and a weight N is assigned to the first entry in the parameter storing table (400), as in step (508). The reset factor 'k' helps to prevent the weight (corresponding to the prominent driver position) from becoming very large by resetting the whole table. The system response time when the driver is changed during the same IG-ON cycle directly depends on k. The minimum response time is N*t and the maximum response time is N*t*k.
[0075] The step (508) is performed to avoid a delay in the response time of the mirror and HUD adjustment if the driver is changed. For example, if the value of the weight reaches 1000 before the driver is changed, then the time required for the new user's (101) most preferred position's weight to cross 1000, starting from 1, will be high, resulting in a delay in resetting the mirrors and HUD for the new user (101).
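The reset criterion of step (508) and the stated response-time bounds can be illustrated with a short sketch; the values of N, k, and t below are example or assumed values only, and the once-per-interval check is an assumption of the sketch rather than part of the disclosure.

```python
N = 50      # weight assigned to the seeded prominent position (example value from the text)
K = 4       # reset factor 'k' (calibration-dependent; the value here is assumed)
T = 1.0     # predetermined sampling interval 't' in seconds (assumed)

def maybe_reset_weights(table: dict) -> None:
    """If the top weight has reached k*N, reset all weights and re-seed the top entry with N."""
    top = max(table, key=table.get)
    if table[top] >= K * N:
        table.clear()
        table[top] = N    # keep only the prominent entry, re-seeded with weight N

# With this scheme, a new driver's position must overtake the current top weight:
#   minimum response time is about N * T     (the table was just reset, so the top weight is N)
#   maximum response time is about N * T * K (the top weight had just reached k*N)
```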
[0076] Referring to Fig. 6, it shows the method (600) for automatically adjusting the vehicle mirrors and HUD. The step (601) comprises detecting the vehicle ignition (IG) ON condition. If the IG is ON, in step (602), the previous most prominent eye position ('Yp', 'Zp') at ignition OFF (IG-OFF) is stored as 'Ye' and 'Ze' in the table and assigned a pre-determined weight value "N". Next, at step (603), the eye position of the user (101) sitting on the driver seat of the vehicle is continuously detected. The eye position detection is repeated after a predetermined time interval by a depth camera (108). In the step (604), the depth camera (108) captures the new 'Ye' and 'Ze' parameters of an eye based on the detected eye positions ('ye', 'ze'). The 'ye' parameter is associated with the vertical positioning of the eye position with respect to the depth camera (108), and the 'ze' parameter is associated with the depth of the eye position with respect to the depth camera (108). The depth camera (108) is capable of calculating the depth of each pixel along with its RGB value. The 'ye' and 'ze' parameters obtained from the depth camera are converted into periodic values 'Ye' and 'Ze'.
[0077] In the step (605), the new 'Ye' and 'Ze' parameter values are compared with the 'Ye' and 'Ze' parameter values already stored in the table. Based on the comparison, the weights of the parameter values are updated and the table is sorted in descending order according to the weight values.
[0078] In other words, in the step (605), 'Ye' and 'Ze' are obtained continuously after every predetermined time interval based on detection of the eye position. The system keeps checking whether the 'Ye' and 'Ze' parameters obtained from the depth camera (108) after a pre-determined interval are already present in the parameter storing table. In case they are the same, the system increases the weight by 1 and the table gets sorted in descending order. In case the values are not the same, the new 'Ye' and 'Ze' parameters are stored in the parameter storing table and a unit weight is assigned to them.
[0079] In the step (606), the 'Ye' and 'Ze' parameters corresponding to the highest weight value in the parameter storing table are taken to be the most favorable position in which the user (101) is sitting. The 'Ye' and 'Ze' values in the parameter storing table having the highest weight value are used for predicting the 'pi' and 'qi' values using the parameter prediction logic. The predicted 'pi' and 'qi' values are used to adjust the ORVMs, the IRVM, and the HUD positions.
[0080] In the step (607), the position estimation values of ORVM (102a, 102b), IRVM (106), and HUD (104) are sent to the ORVM, IRVM, and HUD controllers (103a, 103b, 107, 105) to adjust the mirrors and HUD based on the determined position values. The determined position values are the most favorable position or the most prominent position of the user (101).
[0081] In an aspect of the present invention, the weight values are increased by a unit, or "1"; however, the weight values can be increased by any amount based on calibration and experiments.
TECHNICAL ADVANTAGES
[0082] With the help of the solution as proposed herein in the context of the present disclosure, the system and method identify the most prominent position of the user driving the vehicle and adjust the position of mirrors and HUD accordingly.
[0083] The present disclosure is able to determine if the user has changed and automatically adjusts the mirror and HUD positions according to the user who is driving.
[0084] The present disclosure provides an auto-configurable system that can be calibrated with high precision, thereby reducing errors pertaining to mirror and HUD adjustment.
[0085] It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation, no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to disclosures containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. Also, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general, such construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together,
B and C together, and/or A, B, and C together, etc.).