Abstract: A SYSTEM AND A METHOD FOR GESTURE CONTROLLED OUTSIDE REAR VIEW MIRRORS OF A VEHICLE. The present disclosure discloses a system (100) and a method (200) for gesture controlled outside rear view mirrors. The system (100) comprises: an in-cabin imaging device (108a) to capture live streaming video data of a cabin space of the vehicle; a gesture recognition device (102) to receive the live streaming video data; a gesture recognition unit (110a) to detect landmarks corresponding to the human palm and fingers in each image frame of the received live streaming video data using a set of gesture recognition rules; a gesture classification unit (110b) to perform a time series analysis of the detected landmarks using a set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data; a gesture-to-network mapper (110c) to convert the gesture signal into a controller area network (CAN) control signal; and a body function device (112) to receive the CAN control signal from the CAN and generate an ORVM motor control signal for gesture controlling of the ORVM.
Description: FIELD OF INVENTION
The present disclosure generally relates to the field of control systems. More particularly, the present disclosure relates to a system and a method for gesture controlled outside rear view mirrors of a vehicle.
DEFINITIONS
As used in the present disclosure, the following terms are generally intended to have the meanings set forth below, except to the extent that the context in which they are used indicates otherwise.
Convolution LSTM: The term ‘convolution LSTM’ hereinafter refers to a type of recurrent neural network that combines the strengths of a convolutional neural network (CNN) and a long short-term memory (LSTM) network in order to model a classifier for spatio-temporal prediction, having a convolutional structure in both the input-to-state and state-to-state transitions.
Electronic Control Unit (ECU): The term ‘Electronic Control Unit (ECU)’ hereinafter refers to an embedded system in automotive electronics that controls one or more electrical/electronic systems or subsystems in a car or other motor vehicle.
Inside rear view mirror (IRVM): The term ‘inside rear view mirror (IRVM)’ hereinafter refers to a mirror in a car or any other automobile, which is made in a way that allows the driver to observe rearward vehicles without much eye strain. IRVM is generally mounted on a central upper side of a front windshield inside a vehicle so that it can provide a field of view through a rear windshield of the vehicle.
Outside rear view mirror (ORVM): The term ‘outside rear view mirror (ORVM)’ hereinafter refers to a mechanically or electrically adjustable mirror mounted on the front doors of a vehicle to allow a driver of the vehicle to see behind the vehicle.
Controller area network (CAN): The term ‘controller area network (CAN)’ hereinafter refers to a robust automotive communication standard designed to allow different ECUs and devices to communicate with each other's applications through a common serial communication bus without much complexity.
BACKGROUND
The background information herein below relates to the present disclosure but is not necessarily prior art.
Outside rear view mirrors (ORVMs) are generally mounted on the outer side of the front doors of a vehicle. The ORVMs are mounted for allowing a driver of the vehicle to see behind the vehicle. A viewing angle of these ORVMs is typically adjusted by the driver based on the mounting position of the ORVMs or the driving habits of the driver. In particular, the viewing angle of the ORVMs is adjusted either mechanically through mechanical levers or electronically through electric motors operable through control switches.
However, since the viewing angle of the ORVMs is set before the driver starts driving the vehicle, it is difficult to locate the mechanical levers or electronic control switches for adjusting the viewing angle of the ORVMs while the vehicle is moving. This is because the mechanical levers or electronic control switches are generally placed on the inner trim of the front doors of the vehicle. Even though the levers or switches are ergonomically positioned, drivers of most vehicles are unable to locate them while driving.
Therefore, there is felt a need for a system and a method that allow the driver to adjust the viewing angle of the ORVMs without having to locate the mechanical levers or electronic control switches of the ORVMs.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
The main object of the present disclosure is to provide a system for gesture controlled outside rear view mirrors (ORVM) of a vehicle.
Another object of the present disclosure is to provide a mechanism for adjusting the viewing angles of the ORVMs for a vehicle without using mechanical levers or electronic control switches.
Yet another object of the present disclosure is to provide a mechanism for implementing gesture recognition systems for adjusting the viewing angles of the ORVMs for the vehicle.
Still another object of the present disclosure is to provide a mechanism for implementing a deep learning based recognition model which recognizes user gestures for adjusting the viewing angles of the ORVMs for the vehicle.
Yet another object of the present disclosure is to provide a mechanism for providing a more delightful user experience to a driver of the vehicle while the driver adjusts the viewing angles of the ORVMs without operating mechanical levers or electronic control switches of the ORVMs.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure envisages a gesture recognition device for gesture controlled outside rear view mirrors (ORVMs). The gesture recognition device includes a repository, an input unit, and a microcontroller. The repository is configured to store a set of rules, a set of predefined instructions, a set of gesture recognition rules, and a set of classification rules. The input unit continuously receives a live streaming video data from an in-cabin imaging device. The microcontroller receives the live streaming video data from the input unit.
The microcontroller includes a gesture recognition unit, a gesture classification unit, and a gesture-to-network mapper.
The gesture recognition unit is configured to detect landmarks corresponding to human palms and fingers in each image frame of the received live streaming video data using the set of gesture recognition rules.
The gesture classification unit is configured to perform a time series analysis of the detected landmarks using the set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data.
The gesture-to-network mapper is configured to convert the gesture signal into a controller area network (CAN) control signal for gesture controlling of the ORVM over the CAN.
In an aspect, the in-cabin imaging device is an infrared (IR) camera mounted below an inside rear view mirror (IRVM) along with an IR illumination device to capture the live streaming video data of a predefined area present in front of the in-cabin imaging device in a cabin space of the vehicle.
In an aspect, the set of gesture recognition rules is built based on a machine learning based human palms and fingers tracking mechanism.
In an aspect, the set of classification rules is implemented using a pre-trained convolutional long short-term memory (LSTM) neural network for performing the time series analysis of the detected landmarks for classifying the detected landmarks as the gesture signal.
The present disclosure further envisages a system for gesture controlled outside rear view mirrors (ORVMs) of a vehicle. The system includes an in-cabin imaging device, a gesture recognition device, and a body function device.
The in-cabin imaging device is configured to capture a live streaming video data of a predefined area present in front of the in-cabin imaging device in a cabin space of the vehicle.
The gesture recognition device is configured to receive the live streaming video data from the in-cabin imaging device. The gesture recognition device includes a gesture recognition unit, a gesture classification unit, and a gesture-to-network mapper.
The gesture recognition unit is configured to detect landmarks corresponding to human palms and fingers in each image frame of the received live streaming video data using a set of gesture recognition rules.
The gesture classification unit is configured to perform a time series analysis of the detected landmarks using a set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data.
The gesture-to-network mapper is configured to convert the gesture signal into a controller area network (CAN) control signal.
The body function device is configured to receive the CAN control signal from the CAN and is further configured to generate an ORVM motor control signal based on the CAN control signal for gesture controlling of the ORVM over the CAN.
In an aspect, the in-cabin imaging device is an infrared (IR) camera mounted below an inside rear view mirror (IRVM) along with an IR illumination device to capture the live streaming video data of the cabin space of the vehicle.
In an aspect, the set of gesture recognition rules is built based on a machine learning based human palm and fingers tracking mechanism.
In an aspect, the set of classification rules is implemented using a pre-trained convolutional long short-term memory (LSTM) neural network for performing the time series analysis of the detected landmarks for classifying the detected landmarks as the gesture signal.
In an aspect, the gesture recognition device is a gesture recognition electronic control unit (GR-ECU), and the body function device is a body function electronic control unit (BF-ECU).
In an aspect, the system includes an electric motor that is connected to the ORVM to receive the ORVM motor control signal from the body function device for controlling the movement of the ORVM.
In an aspect, the movement of the ORVM includes movement of the ORVM in a control direction including a left side direction, a right side direction, an upward direction, and a downward direction.
The present disclosure further envisages a method for gesture controlled outside rear view mirror (ORVM). The method comprises the following steps:
• capturing, by an in-cabin imaging device, a live streaming video data of a predefined area present in front of the in-cabin imaging device in a cabin space of the vehicle;
• receiving, by a gesture recognition device, the live streaming video data from the in-cabin imaging device;
• detecting, by a gesture recognition unit of the gesture recognition device, landmarks corresponding to human palm and fingers in each image frame of the received live streaming video data using a set of gesture recognition rules;
• performing, by a gesture classification unit of the gesture recognition device, a time series analysis of the detected landmarks using a set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data;
• converting, by a gesture-to-network mapper of the gesture recognition device, the gesture signal into a controller area network (CAN) control signal;
• receiving, by a body function device, the CAN control signal from the CAN; and
• generating, by the body function device, an ORVM motor control signal based on the CAN control signal for gesture controlling of the ORVM over the CAN.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
A system and a method for gesture controlled outside rear view mirror of the present disclosure will now be described with the help of the accompanying drawings, in which:
Figure 1 illustrates a block diagram of a system for gesture controlled outside rear view mirror, in accordance with an embodiment of the present disclosure;
Figure 2 illustrates a flow diagram of a method for gesture controlled outside rear view mirror, in accordance with an embodiment of the present disclosure; and
Figure 3 illustrates a high-level network architecture of the system for gesture controlled outside rear view mirror, in accordance with an embodiment of the present disclosure.
LIST OF REFERENCE NUMERALS
100 - System
102 - Gesture Recognition Device
104 - Outside Rear View Mirror (ORVM)
106 - Repository
108 - Input Unit
108a - In-Cabin Imaging Device
108b - IR Illumination Device
110 - Microcontroller
110a - Gesture Recognition Unit
110b - Gesture Classification Unit
110c - Gesture-To-Network Mapper
112 - Body Function Device
DETAILED DESCRIPTION
Embodiments of the present disclosure will now be described with reference to the accompanying drawings.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components, and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used in the present disclosure is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms “including” and “having” are open-ended transitional phrases and therefore specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
When an element is referred to as being “engaged to,” “connected to,” or “coupled to” another element, it may be directly engaged, connected, or coupled to the other element. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed elements.
Outside rear view mirrors (ORVMs) are generally mounted on the outer side of the front doors of a vehicle. The ORVMs are mounted for allowing a driver of the vehicle to see behind the vehicle. A viewing angle of these ORVMs is typically adjusted by the driver based on the mounting position of the ORVMs or the driving habits of the driver. In particular, the viewing angle of the ORVMs is adjusted either mechanically through mechanical levers or electronically through electric motors operable through control switches.
However, since the viewing angle of the ORVMs is set before the driver starts driving the vehicle, it is difficult to locate the mechanical levers or electronic control switches for adjusting the viewing angle of the ORVMs when the vehicle is moving forward.
To overcome the above-mentioned problem, the present disclosure proposes a system (hereinafter referred to as “system 100”) and a method (hereinafter referred to as “method 200”) for gesture controlled outside rear view mirrors (ORVMs) of a vehicle. In the system 100, the use of gesture recognition is proposed to control an electrical ORVM, thereby replacing the need for manual control and switches. This approach not only reduces the cost of switches, but also provides a more ergonomic and safe way to control the ORVM. The system 100 and the method 200 are now described with reference to Figure 1 and Figure 2.
Referring to Figure 1, the system 100 includes a gesture recognition device 102, an Outside Rear View Mirror (ORVM) 104, a repository 106, an input unit 108, a microcontroller 110, an in-cabin imaging device 108a, and a body function device 112.
The ORVM 104 is an electronically operated ORVM that is adjusted using an electric motor upon receiving an ORVM motor control signal generated in response to a hand gesture performed by a user of the vehicle in a predefined area present in front of the in-cabin imaging device 108a.
The in-cabin imaging device 108a is configured to capture a live streaming video data of a predefined area present in front of the in-cabin imaging device in a cabin space of the vehicle.
Once the live streaming video data is captured, the gesture recognition device 102 receives the captured live streaming video data for gesture controlled ORVM 104. The gesture recognition device 102 includes a repository 106, an input unit 108, and a microcontroller 110. The repository 106 is configured to store a set of rules, a set of predefined instructions, a set of gesture recognition rules, and a set of classification rules. The input unit 108 is configured to continuously receive the captured live streaming video data from the in-cabin imaging device 108a to determine or detect the hand gesture performed by the user. When the hand gesture is detected, the input unit 108 transmits the captured live streaming video data to the microcontroller 110.
The microcontroller 110 includes a gesture recognition unit 110a, a gesture classification unit 110b, and a gesture-to-network mapper 110c. On receipt of the captured live streaming video data, the gesture recognition unit 110a detects the landmarks corresponding to human palms and fingers in each image frame of the received live streaming video data using the set of gesture recognition rules. Thereafter, the gesture classification unit 110b performs a time series analysis of the detected landmarks using the set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data. Following this, the gesture-to-network mapper 110c converts the gesture signal into a controller area network (CAN) control signal for gesture controlling of the ORVM over the CAN.
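By way of illustration only, the landmark detection performed by the gesture recognition unit 110a may be sketched in a few lines of Python. The disclosure does not mandate any particular library; the MediaPipe-style hand tracker, the camera index, and the confidence thresholds below are assumptions made for this sketch.

```python
# Minimal sketch of the gesture recognition unit (110a).
# Assumption: a MediaPipe-style hand tracker provides the 21 palm/finger
# landmarks per frame referred to in this disclosure.
from collections import deque

import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(
    max_num_hands=1,               # a single controlling hand is assumed
    min_detection_confidence=0.6,  # assumed threshold
    min_tracking_confidence=0.6)   # assumed threshold

def detect_landmarks(frame_bgr):
    """Return 21 (x, y) landmarks for one frame, or None if no hand is seen."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    return [(lm.x, lm.y) for lm in result.multi_hand_landmarks[0].landmark]

window = deque(maxlen=30)          # sliding window for the classifier (110b)
cap = cv2.VideoCapture(0)          # stands in for the in-cabin IR camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    landmarks = detect_landmarks(frame)
    if landmarks is not None:
        window.append(landmarks)   # consumed by the gesture classification unit
```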
The CAN control signal is then received by a body function device 112 to generate an ORVM motor control signal for gesture controlling of the ORVM 104 over the CAN.
In an aspect, the in-cabin imaging device 108a is an infrared (IR) camera mounted below an inside rear view mirror (IRVM) along with an IR illumination device 108b to capture the live streaming video data of a cabin space of the vehicle.
In an aspect, the set of gesture recognition rules is built based on a machine learning based human palms and fingers tracking mechanism.
In an aspect, the set of classification rules is implemented using a pre-trained convolutional long short-term memory (LSTM) neural network for performing the time series analysis of the detected landmarks for classifying the detected landmarks as the gesture signal.
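A minimal sketch of such a classifier is given below, assuming TensorFlow/Keras. The window length, the 21x2 landmark layout, and the gesture vocabulary are illustrative assumptions, not values fixed by the disclosure, and the weights would come from the pre-training referred to above.

```python
# Hedged sketch of the gesture classification unit (110b): a convolutional
# LSTM over a sliding window of landmark frames.
import numpy as np
import tensorflow as tf

WINDOW = 30                                          # frames per sample (assumed)
GESTURES = ["left", "right", "up", "down", "none"]   # assumed label set

model = tf.keras.Sequential([
    # each frame is treated as a 21x2 grid of (x, y) landmark coordinates
    tf.keras.layers.ConvLSTM2D(filters=16, kernel_size=(3, 2),
                               input_shape=(WINDOW, 21, 2, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(GESTURES), activation="softmax"),
])
# in practice the pre-trained weights would be loaded here, e.g.
# model.load_weights("gesture_convlstm.h5")  # hypothetical file name

def classify(window_frames):
    """window_frames: (WINDOW, 21, 2) landmark array -> gesture label."""
    x = np.asarray(window_frames, dtype="float32")[None, ..., None]
    return GESTURES[int(model.predict(x, verbose=0)[0].argmax())]
```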
In an aspect, the gesture recognition device 102 is a gesture recognition electronic control unit (GR-ECU), and the body function device 112 is a body function electronic control unit (BF-ECU).
In an aspect, the system 100 includes an electric motor that is connected to the ORVM 104 to receive the ORVM motor control signal from the body function device 112 for controlling the movement of the ORVM 104.
In an aspect, the movement of the ORVM 104 includes movement of the ORVM 104 in a control direction including a left side direction, a right side direction, an upward direction, and a downward direction.
In an aspect, the system 100 may include a soft button on a touch-enabled infotainment screen to communicate the control signal to the ORVM 104 over the CAN.
In an aspect, the system 100 can be implemented and extended to control any body function, such as the sunroof, windows, or doors, or any robotic machine, using hand gestures of the user of the vehicle.
Figure 2 illustrates a method 200 for gesture controlled outside rear view mirror in accordance with an embodiment of the present disclosure. The order in which method 200 is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement method 200, or an alternative method.
At step 202, the method 200 includes capturing, by an in-cabin imaging device 108a, a live streaming video data of a predefined area present in front of the in-cabin imaging device 108a in a cabin space of the vehicle.
At step 204, the method 200 includes receiving, by a gesture recognition device 102, the live streaming video data from the in-cabin imaging device 108a.
At step 206, the method 200 includes detecting, by a gesture recognition unit 110a of the gesture recognition device 102, landmarks corresponding to the human palm and fingers in each image frame of the received live streaming video data using a set of gesture recognition rules.
At step 208, the method 200 includes performing, by a gesture classification unit 110b of the gesture recognition device 102, a time series analysis of the detected landmarks using a set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data.
At step 210, the method 200 includes converting, by a gesture-to-network mapper 110c of the gesture recognition device 102, the gesture signal into a controller area network (CAN) control signal.
At step 212, the method 200 includes receiving, by a body function device 112, the CAN control signal from the CAN.
At step 214, the method 200 includes generating, by the body function device 112, an ORVM motor control signal based on the CAN control signal for gesture controlling of the ORVM over the CAN.
Figure 3 illustrates a high-level network architecture of the system 100, reflecting the process flow of operation of a gesture controlled outside rear view mirror (ORVM) in accordance with an exemplary embodiment of the present disclosure. The in-cabin imaging device 108a is configured to capture live streaming video data when a hand gesture is detected in a predefined area present in front of the in-cabin imaging device 108a in a cabin space of the vehicle. In an aspect, the in-cabin imaging device 108a consists of an IR camera module mounted below the IRVM along with IR illuminators capable of capturing images in a dark cabin environment. Once the live streaming video data is captured, the gesture recognition device 102 receives the captured live streaming video data for the gesture controlled ORVM 104. The gesture recognition device 102 includes a gesture recognition unit 110a, a gesture classification unit 110b, and a gesture-to-network mapper 110c. On receipt of the captured live streaming video data, the gesture recognition unit 110a detects the landmarks corresponding to human palms and fingers in each image frame of the received live streaming video data using the set of gesture recognition rules. Thereafter, the gesture classification unit 110b performs a time series analysis of the detected landmarks using the set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data. In an aspect, the set of classification rules is implemented using a pre-trained convolutional long short-term memory (LSTM) neural network for performing the time series analysis of the detected landmarks for classifying the detected landmarks as the gesture signal. In an example, the LSTM neural network deep learning model may be trained on top of a hand detector and tracker that can detect and provide 21 landmarks corresponding to human palms and fingers in a single image frame.
Further, the gesture-to-network mapper 110c converts the gesture signal into a controller area network (CAN) control signal and transmits the same to the body function device 112.
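One possible shape of this mapping is sketched below with the python-can library. The arbitration identifier 0x3E8 and the one-byte direction payload are hypothetical values chosen for illustration, since the disclosure does not fix a CAN message layout; a production vehicle would use its own message catalogue.

```python
# Hedged sketch of the gesture-to-network mapper (110c).
import can

ORVM_CTRL_ID = 0x3E8                     # hypothetical CAN arbitration ID
DIRECTION_CODE = {"left": 0x01, "right": 0x02, "up": 0x03, "down": 0x04}

bus = can.Bus(interface="socketcan", channel="can0")

def send_gesture(gesture: str) -> None:
    """Publish a classified gesture signal as a CAN control signal."""
    code = DIRECTION_CODE.get(gesture)
    if code is None:                     # e.g. the "none" class: nothing to send
        return
    bus.send(can.Message(arbitration_id=ORVM_CTRL_ID,
                         data=[code], is_extended_id=False))
```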
The body function device 112 then generates an ORVM motor control signal based on the CAN control signal for gesture controlling of an electric motor that is connected to the ORVM 104 for adjusting the ORVM 104 over the CAN.
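The receiving side may be sketched in the same illustrative style. Here drive_motor() is a hypothetical placeholder for the H-bridge or motor-driver stage that actually moves the ORVM 104, and the message layout matches the assumed one above.

```python
# Hedged sketch of the body function device (112): decode the CAN control
# signal and derive an ORVM motor control signal.
import can

ORVM_CTRL_ID = 0x3E8                     # must match the mapper's (assumed) ID
CODE_TO_DIRECTION = {0x01: "left", 0x02: "right", 0x03: "up", 0x04: "down"}

def drive_motor(direction: str) -> None:
    # hypothetical stand-in for the actual motor-driver (H-bridge/PWM) stage
    print(f"ORVM motor control signal: move {direction}")

with can.Bus(interface="socketcan", channel="can0") as bus:
    for msg in bus:                      # a Bus iterates over received messages
        if msg.arbitration_id != ORVM_CTRL_ID or not msg.data:
            continue
        direction = CODE_TO_DIRECTION.get(msg.data[0])
        if direction is not None:
            drive_motor(direction)
```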
In an aspect, the gesture recognition device 102 is a gesture recognition electronic control unit (GR-ECU), and the body function device 112 is a body function electronic control unit (BF-ECU).
The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but, are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a system and a method for gesture controlled outside rear view mirror that:
• provide switch-free control,
• provide hands-free control instead of physical controls,
• improve user experience,
• provide ergonomic control,
• provide comfort and serenity, and
• are easy to use and safe.
The embodiments herein and the various features and advantageous details thereof are explained concerning the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
Claims: WE CLAIM:
1. A gesture recognition device (102) for gesture controlled outside rear view mirrors (ORVMs) (104) of a vehicle, said gesture recognition device (102) comprising:
• a repository (106) to store a set of rules, a set of predefined instructions, a set of gesture recognition rules, and a set of classification rules;
• an input unit (108) to continuously receive a live streaming video data from an in-cabin imaging device (108a);
• a microcontroller (110), coupled to the input unit (108), to receive the live streaming video data from the input unit (108), the microcontroller (110) comprises:
i. a gesture recognition unit (110a) to detect landmarks corresponding to human palm and fingers in each image frame of the received live streaming video data using the set of gesture recognition rules,
ii. a gesture classification unit (110b) to perform a time series analysis of the detected landmarks using the set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data, and
iii. a gesture-to-network mapper (110c) to convert the gesture signal into a controller area network (CAN) control signal for gesture controlling of the ORVM over the CAN.
2. The gesture recognition device (102) as claimed in claim 1, wherein the in-cabin imaging device (108a) is an infrared (IR) camera mounted adjacent to an inside rear view mirror (IRVM) along with an IR illumination device (108b) to capture the live streaming video data of an area present in front of the in-cabin imaging device (108a) in a cabin space of the vehicle.
3. The gesture recognition device (102) as claimed in claim 1, wherein said set of gesture recognition rules are built based on a machine learning based human palm and fingers tracking mechanism and said set of classification rules are implemented using a pre-trained convolutional long short-term memory, LSTM, neural network for performing the time series analysis of the detected landmarks for classifying the detected landmarks as the gesture signal.
4. A system (100) for gesture controlled outside rear view mirrors (ORVMs) (104) of a vehicle, said system (100) comprising:
• an in-cabin imaging device (108a) to capture a live streaming video data of a predefined area present in front of the in-cabin imaging device (108a) in a cabin space of the vehicle;
• a gesture recognition device (102), to receive the live streaming video data from the in-cabin imaging device (108a), wherein the gesture recognition device (102) includes:
i. a gesture recognition unit (110a) to detect landmarks corresponding to human palm and fingers in each image frame of the received live streaming video data using a set of gesture recognition rules,
ii. a gesture classification unit (110b) to perform time series analysis of the detected landmarks using a set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data, and
iii. a gesture-to-network mapper (110c) to convert the gesture signal into a controller area network (CAN) control signal; and
• a body function device (112) to receive the CAN control signal from the CAN and to generate an ORVM motor control signal based on the CAN control signal for gesture controlling of the ORVM over the CAN.
5. The system (100) as claimed in claim 4, wherein the in-cabin imaging device (108a) is an infrared (IR) camera mounted below an inside rear view mirror (IRVM) along with an IR illumination device to capture the live streaming video data of the cabin space of the vehicle.
6. The system (100) as claimed in claim 4, wherein said set of gesture recognition rules are built based on a machine learning based human palm and fingers tracking mechanism and said set of classification rules are implemented using a pre-trained convolutional long short-term memory, LSTM, neural network for performing the time series analysis of the detected landmarks for classifying the detected landmarks as the gesture signal.
7. The system (100) as claimed in claim 4, wherein said gesture recognition device (102) is a gesture recognition electronic control unit (GR-ECU), and said body function device (112) is a body function electronic control unit (BF-ECU).
8. The system (100) as claimed in claim 4, wherein said system (100) comprises an electric motor connected to the ORVM (104) to receive the ORVM motor control signal from the body function device (112) for controlling the movement of the ORVM (104), wherein the movement of the ORVM (104) includes movement of the ORVM (104) in a control direction including a left side direction, a right side direction, an upward direction, and a downward direction.
9. A method (200) for gesture controlled outside rear view mirror, said method (200) comprising the following steps:
• capturing, by an in-cabin imaging device (108a), a live streaming video data of a predefined area present in front of the in-cabin imaging device (108a) in a cabin space of the vehicle;
• receiving, by a gesture recognition device (102), the live streaming video data from the in-cabin imaging device;
• detecting, by a gesture recognition unit (110a) of the gesture recognition device (102), landmarks corresponding to human palm and fingers in each image frame of the received live streaming video data using a set of gesture recognition rules;
• performing, by a gesture classification unit (110b) of the gesture recognition device (102), a time series analysis of the detected landmarks using a set of classification rules for classifying the detected landmarks as a gesture signal in the received live streaming video data;
• converting, by a gesture-to-network mapper (110c) of the gesture recognition device (102), the gesture signal into a controller area network (CAN) control signal;
• receiving, by a body function device (112), the CAN control signal from the CAN; and
• generating, by said body function device (112), an ORVM motor control signal based on the CAN control signal for gesture controlling of the ORVM over the CAN.
Dated this 26th day of December, 2022
_______________________________
MOHAN RAJKUMAR DEWAN, IN/PA – 25
of R.K.DEWAN & CO.
Authorized Agent of Applicant