Abstract: Disclosed herein is a method and a system for remote monitoring of a driver (103) of a commercial vehicle (101). The system comprises an in-vehicle vision module (107), and a controller unit (109). The controller unit (109) receives one or more images (111) of the driver (103) driving the commercial vehicle (101) from the in-vehicle vision module (107), determines an active state of the driver (103) based on the received one or more images (111), performs at least one of: generating, based on the determined active state, one or more Controller Area Network (CAN) messages (113) associated with the active state, and generating, based on the determined active state, one or more visual contents (115) associated with the driver (103); and transmits at least one of the one or more generated CAN messages (113) and the generated one or more visual contents (115) to an electronic device (121) operated by a fleet owner (125) associated with the commercial vehicle (101). Fig. 1
Claims: 1. A method of remote monitoring of a driver (103) of a commercial vehicle (101), the method comprising:
receiving, by a monitoring system (105), one or more images (111) of the driver (103) driving the commercial vehicle (101) from an in-vehicle vision module (107);
determining, by the monitoring system (105), an active state of the driver (103) based on the received one or more images (111);
performing, by the monitoring system (105), at least one of:
generating, based on the determined active state, one or more Controller Area Network (CAN) messages (113) associated with the active state, and
generating, based on the determined active state, one or more visual contents (115) associated with the driver (103) by a controller unit (109) of the monitoring system (105);
transmitting, by the monitoring system (105), at least one of the one or more generated CAN messages (113) and the generated one or more visual contents (115) to an electronic device (121) operated by a fleet owner (125) associated with the commercial vehicle (101).
2. The method as claimed in claim 1, wherein determining the active state comprises:
determining movement of eyes’ pupils and head movement of the driver (103) in the received one or more images (111); and
determining the active state as one of a drowsiness state and an in-attentiveness state based on the determined movement of eyes’ pupils and head movement utilizing one or more predefined Machine Learning (ML) models.
3. The method as claimed in claim 2, further comprising:
determining a severity level associated with the active state based on the determined movement of eyes’ pupils and head movement.
4. The method as claimed in claim 3, further comprising:
providing at least one of an audio alert and a notification to the driver (103) based on the active state and the severity level associated with the active state, wherein the notification is provided on a Human Machine Interface (HMI) associated with the monitoring system (105).
5. The method as claimed in claim 3, wherein each of the one or more CAN messages (113) comprises a plurality of information associated with the driver (103), wherein the plurality of information associated with the driver (103) comprises identification information of the driver (103), the active state of the driver (103), the severity level associated with the active state, current location information associated with the commercial vehicle (101), and time stamp information associated with the active state.
6. The method as claimed in claim 1, wherein transmitting the at least one of the one or more CAN messages (113) and the one or more visual contents (115) to the electronic device (121) comprises:
transmitting, by the monitoring system (105), the one or more CAN messages (113) to a Telematics Control Unit (TCU) (117) communicatively coupled with the monitoring system (105) over a CAN bus, and the one or more visual contents (115) to the TCU (117) over a first wireless network; and
transmitting, by the TCU (117), the at least one of the one or more CAN messages (113) and the one or more visual contents (115) to the electronic device (121) operated by the fleet owner (125) via a cloud server (119) over a second wireless network.
7. The method as claimed in claim 1, further comprising:
detecting a fault in at least one of the in-vehicle vision module (107), the controller unit (109) and an interconnecting wiring harness;
generating one or more error CAN messages based on the detected fault; and
transmitting the one or more generated error CAN messages to the electronic device (121) operated by the fleet owner (125) via a TCU (117) of the commercial vehicle (101) and a cloud server (119).
8. A monitoring system (105) for remote monitoring of a driver (103) of a commercial vehicle (101), the monitoring system (105) comprising:
an in-vehicle vision module (107) comprising an image sensor (301) and a plurality of Infrared (IR) Light Emitting Diodes (LEDs) (303);
a controller unit (109), communicatively coupled to the in-vehicle vision module (107), wherein the controller unit (109) is configured to:
receive one or more images (111) of the driver (103) driving the commercial vehicle (101) from the in-vehicle vision module (107);
determine an active state of the driver (103) based on the received one or more images (111);
perform at least one of:
generating, based on the determined active state, one or more Controller Area Network (CAN) messages (113) associated with the active state, and
generating, based on the determined active state, one or more visual contents (115) associated with the driver (103);
transmit at least one of the one or more generated CAN messages (113) and the generated one or more visual contents (115) to an electronic device (121) operated by a fleet owner (125) associated with the commercial vehicle (101).
9. The monitoring system (105) as claimed in claim 8, wherein the controller unit (109) is configured to:
determine movement of eyes’ pupils and head movement of the driver (103) in the received one or more images (111); and
determine the active state as one of a drowsiness state and an in-attentiveness state based on the determined movement of eyes’ pupils and head movement utilizing one or more predefined Machine Learning (ML) models.
10. The monitoring system (105) as claimed in claim 9, wherein the controller unit (109) is configured to determine a severity level associated with the active state based on the determined movement of eyes’ pupils and head movement.
11. The monitoring system (105) as claimed in claim 10, wherein the controller unit (109) is configured to provide at least one of an audio alert and a notification to the driver (103) based on the active state and the severity level associated with the active state, wherein the notification is provided on a Human Machine Interface (HMI) associated with the monitoring system (105).
12. The monitoring system (105) as claimed in claim 10, wherein each of the one or more CAN messages (113) comprises a plurality of information associated with the driver (103) of the commercial vehicle (101), wherein the plurality of information associated with the driver (103) comprises identification information of the driver (103), the active state of the driver (103), the severity level associated with the active state, current location information associated with the commercial vehicle (101), and time stamp information associated with the active state.
13. The monitoring system (105) as claimed in claim 8, wherein the controller unit (109) is configured to:
transmit the one or more CAN messages (113) to a Telematics Control Unit (TCU) (117) communicatively coupled with the monitoring system (105) over a CAN bus, and the one or more visual contents (115) to the TCU (117) over a first wireless network; and
transmit, from the TCU (117), the at least one of the one or more CAN messages (113) and the one or more visual contents (115) to the electronic device (121) operated by the fleet owner (125) via a cloud server (119) over a second wireless network.
14. The monitoring system (105) as claimed in claim 8, wherein the controller unit (109) is configured to:
detect a fault in at least one of the in-vehicle vision module (107), the controller unit (109) and an interconnecting wiring harness;
generate one or more error CAN messages based on the detected fault; and
transmit the one or more generated error CAN messages to the electronic device (121) operated by the fleet owner (125) via a TCU (117) of the commercial vehicle (101) and a cloud server (119).
Description:
TECHNICAL FIELD
The present subject matter is generally related to real-time tracking of a driver's state and more particularly, but not exclusively, to a method and a monitoring system for remote monitoring of a driver of a commercial vehicle.
BACKGROUND
Generally, road accidents involving commercial vehicles occur due to a drowsiness state or an in-attentiveness state of drivers. Continuous driving for prolonged periods and sleep deprivation are among the main reasons behind the drowsiness state and the in-attentiveness state of drivers. Conventional monitoring systems detect the drowsiness state and the in-attentiveness state by monitoring the drivers and provide necessary alerts or warning signals to the driver for restoring an appropriate active state suitable for safe driving. However, such alerts or warning signals keep the driver awake and attentive only temporarily. Further, in some scenarios, the driver of the commercial vehicle does not properly respond to the alerts or warning signals and continues driving without taking rest, in order to deliver consignments to target locations on time. To avoid probable road accidents due to the drowsiness state and the in-attentiveness state of the driver in the aforesaid scenarios, intervention of a fleet owner of the commercial vehicle is required to instruct or permit the driver to perform specific actions. However, the fleet owner generally receives a notification only after the commercial vehicle has met with an accident. Thus, conventional monitoring systems lack the ability to transmit real-time active-state information of the driver, with seamless connectivity between the commercial vehicle's monitoring system and the fleet owner, for easy tracking of the active state of the driver.
The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
SUMMARY
One or more shortcomings of the prior art are overcome by a system and a method as claimed and additional advantages are provided through the system and the method as claimed in the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
In one non-limiting embodiment of the present disclosure, a method of remote monitoring of a driver of a commercial vehicle is disclosed. The method comprises receiving, by a monitoring system, one or more images of the driver driving the commercial vehicle from an in-vehicle vision module. Upon receiving the one or more images, the method comprises determining, by the monitoring system, an active state of the driver based on the received one or more images. Further, the method comprises performing, by the monitoring system, at least one of generating one or more Controller Area Network (CAN) messages associated with the active state based on the determined active state and generating one or more visual contents associated with the driver by a controller unit of the monitoring system based on the determined active state. Thereafter, the method comprises transmitting, by the monitoring system, at least one of the one or more generated CAN messages and the generated one or more visual contents to an electronic device operated by a fleet owner associated with the commercial vehicle.
In another non-limiting embodiment of the present disclosure, a monitoring system for remote monitoring of a driver of a commercial vehicle is disclosed. The system comprises an in-vehicle vision module, and a controller unit, communicatively coupled to the in-vehicle vision module. The in-vehicle vision module comprises an image sensor and a plurality of Infrared (IR) Light Emitting Diodes (LEDs). The controller unit receives one or more images of the driver driving the commercial vehicle from the in-vehicle vision module. Based on the received one or more images, the controller unit determines an active state of the driver. Further, the controller unit performs at least one of generating one or more Controller Area Network (CAN) messages associated with the active state based on the determined active state and generating one or more visual contents associated with the driver based on the determined active state. Thereafter, the controller unit transmits at least one of the one or more generated CAN messages and the generated one or more visual contents to an electronic device operated by a fleet owner associated with the commercial vehicle.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The novel features and characteristics of the disclosure are set forth in the appended claims. The disclosure itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying figures. One or more embodiments are now described, by way of example only, with reference to the accompanying figures wherein like reference numerals represent like elements and in which:
Fig.1 shows an exemplary architecture for remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
Fig.2 shows a sequence diagram for remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
Fig.3 shows a block diagram of a monitoring system for remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
Fig.4a shows an exemplary scenario illustrating remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
Fig.4b-4d show exemplary illustrations of displaying a plurality of webpage screens on an electronic device of a fleet owner in accordance with some embodiments of the present disclosure.
Fig.5 shows a flow chart illustrating a method of remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
The figures depict embodiments of the disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the system and method illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION
In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the specific forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
The terms “comprises”, “comprising”, “includes”, “including” or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device, or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
Embodiments of the present disclosure may relate to a method and a monitoring system for remote monitoring of a driver of a commercial vehicle. The monitoring system generates one or more Controller Area Network (CAN) messages based on a determination of the real-time active state of the driver, and immediately transmits the CAN messages to a fleet-owner-operated electronic device through a Telematics Control Unit (TCU) and a cloud server over a CAN bus and a cellular network, respectively. This enables the fleet owner to easily track the active state of the driver of the commercial vehicle owing to seamless connectivity between the monitoring system of the commercial vehicle and the electronic device.
Also, real-time/recorded visual contents which indicate the active state of the driver are transmitted to the fleet-owner-operated electronic device in real-time or on demand through the TCU and the cloud server. Based on the received CAN messages and the visual contents, the fleet owner may perform necessary actions to alert the driver in case the driver is not awake and attentive in spite of in-vehicle alerts. In this manner, the disclosed monitoring system enables the fleet owner to monitor the driver of the commercial vehicle in real-time to timely prevent collision of the commercial vehicle with neighboring vehicles on a road, or with roadside infrastructure. Further, the monitoring system activates at least one of an in-built buzzer, a voice messaging system, an in-vehicle infotainment system and a haptic system associated with the commercial vehicle for warning the driver in a timely manner to avoid occurrence of hazardous events.
Fig.1 shows an exemplary architecture for remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
As shown in Fig.1, the architecture 100 may include a commercial vehicle 101, a driver 103 driving the commercial vehicle 101, a monitoring system 105 associated with the commercial vehicle 101, a Telematics Control Unit (TCU) 117 communicatively coupled with the monitoring system 105, a cloud server 119, an electronic device 121 with a display screen 123, and a fleet owner 125 operating the electronic device 121. The cloud server 119 is communicatively coupled between the TCU 117 of the commercial vehicle 101 and the electronic device 121. The commercial vehicle 101 may include, but is not limited to, a heavy vehicle and a light vehicle.
In an embodiment, the monitoring system 105 associated with the commercial vehicle 101 may comprise an in-vehicle vision module 107 and a controller unit 109. The in-vehicle vision module 107 may comprise an image sensor and a plurality of Infrared (IR) Light Emitting Diodes (LEDs). The in-vehicle vision module 107 may be electrically and communicatively coupled with the controller unit 109. As an example, the in-vehicle vision module 107 may communicate with the controller unit 109 based on the Universal Serial Bus (USB) protocol. Further, the controller unit 109 may communicate with the TCU 117 through a Controller Area Network (CAN) bus interface. Also, the controller unit 109 may perform communication with the TCU 117 over a first wireless network, based on a short-range wireless technology. As an example, the short-range wireless technology may include, but is not limited to, Wireless Fidelity (Wi-Fi), Ultra-Wideband (UWB), Zigbee and Bluetooth. Further, the TCU 117 of the commercial vehicle 101 may perform communication with the electronic device 121 of the fleet owner 125 through the cloud server 119 over a second wireless network. As an example, the second wireless network may include, but is not limited to, a second generation (2G) Global System for Mobile communication (GSM) network, a third generation (3G) cellular network, a fourth generation (4G) Long-Term Evolution (LTE) network, and a fifth generation (5G) cellular network. Further, the electronic device 121 operated by the fleet owner 125 may include, but is not limited to, a mobile phone, a smart phone, a laptop, a computer, and a tablet.
In the present disclosure, a layout of the monitoring system 105 is accomplished in a manner to capture images of different drivers with all possible vehicle seat articulations. The in-vehicle vision module 107 is designed in such a way that it can be mounted at an appropriate location depending on the application. As an example, the in-vehicle vision module 107 may be mounted on an A-pillar of the commercial vehicle 101, on a dashboard, or on a head mount with the help of suitable bracketry and fasteners. The in-vehicle vision module 107 may be connected to the controller unit 109 via a plurality of interconnecting wiring harnesses using the USB communication protocol.
Further, in the commercial vehicle 101, the monitoring system 105 may be communicatively coupled with an Engine Management System (EMS) comprising an Electronic Control Unit (ECU) (not shown in figure) through the CAN bus interface. Also, the monitoring system 105 may be communicatively coupled with one or more electronic devices associated with the commercial vehicle 101. The one or more electronic devices may include, but are not limited to, an in-built buzzer, a voice messaging system comprising a speaker, an in-vehicle infotainment system comprising a Human Machine Interface (HMI) and a haptic system comprising a plurality of haptic sensors associated with a driver’s seat (not shown in figure). The in-built buzzer and the speaker of the voice messaging system may be provided at a proximal distance from an ear of the driver 103 driving the commercial vehicle 101. For example, the in-built buzzer and the speaker of the voice messaging system may be provided in the interior part of the commercial vehicle door or at any other suitable place near the driver 103.
In an embodiment, the in-vehicle vision module 107 may capture one or more images 111 of the driver 103 driving the commercial vehicle 101 upon ignition of an engine of the commercial vehicle 101. The controller unit 109 may receive the one or more captured images from the in-vehicle vision module 107 based on the USB protocol. The one or more captured images may be one of grayscale images, Red Green Blue (RGB) images and binary images.
In an embodiment, the controller unit 109 may determine an active state of the driver 103 based on the received one or more images 111. Particularly, the controller unit 109 may extract a plurality of facial features of the driver 103. Based on the extracted facial features, the controller unit 109 may determine movement of eyes’ pupils and head movement of the driver 103 in the received one or more images 111. Further, the controller unit 109 may determine the active state as one of a drowsiness state and an in-attentiveness state based on the determined movement of eyes’ pupils and head movement. In an embodiment, the controller unit 109 may determine a severity level associated with the active state based on the determined movement of eyes’ pupils and head movement. Particularly, the controller unit 109 may utilize one or more predefined Machine Learning (ML) models, which may be trained to determine the active state and the associated severity level of the driver 103 by tracking the movement of eyes’ pupils and the head movement of the driver 103.
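As a non-limiting illustration, the state determination described above may be sketched as follows. The rule-based function below is a simplified stand-in for the one or more predefined ML models; the threshold values, function names and state encoding are assumptions for illustration only, not values disclosed herein.

```python
from enum import Enum


class ActiveState(Enum):
    NORMAL = 0
    DROWSINESS = 1
    INATTENTIVENESS = 2


def classify_active_state(eye_closure_s: float,
                          gaze_deviation_deg: float,
                          head_tilt_deg: float,
                          closure_limit_s: float = 1.0,
                          gaze_limit_deg: float = 20.0,
                          tilt_limit_deg: float = 25.0) -> ActiveState:
    """Rule-based stand-in for the predefined ML models: prolonged eye
    closure or a large head tilt suggests drowsiness; a sustained gaze
    deviation suggests in-attentiveness. All limits are illustrative."""
    if eye_closure_s >= closure_limit_s or head_tilt_deg >= tilt_limit_deg:
        return ActiveState.DROWSINESS
    if gaze_deviation_deg >= gaze_limit_deg:
        return ActiveState.INATTENTIVENESS
    return ActiveState.NORMAL
```

In a deployed system, the same interface could instead wrap an inference call into the trained ML models tracking the movement of eyes’ pupils and the head movement.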
In an embodiment, the controller unit 109 may perform at least one of: generating one or more Controller Area Network (CAN) messages 113 associated with the active state, and generating one or more visual contents 115 associated with the driver 103. Particularly, the controller unit 109 may generate the one or more CAN messages 113 based on the determined active state of the driver 103. Each of the one or more CAN messages 113 may comprise a plurality of information associated with the driver 103 of the commercial vehicle 101. The plurality of information associated with the driver 103 may comprise identification information of the driver 103, the active state of the driver 103, the severity level associated with the active state, current location information associated with the commercial vehicle 101, and time stamp information associated with the active state. As an example, the identification information of the driver 103 may comprise a name of the driver 103, an age of the driver 103, and a driving license number associated with the driver 103. Additionally, the controller unit 109 may generate the one or more visual contents 115 indicating the active state of the driver 103.
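As a non-limiting illustration, a classic CAN data frame carries at most 8 bytes, so the plurality of information would in practice be spread over several frames or a transport protocol. The signal layout below is a hypothetical packing of a subset of the disclosed fields; location information would travel in a further frame.

```python
import struct
import time


# Hypothetical layout: 2-byte driver ID index, 1-byte active state,
# 1-byte severity level, 4-byte Unix timestamp = 8 bytes total.
def build_state_payload(driver_idx: int, state: int, severity: int) -> bytes:
    return struct.pack(">HBBI", driver_idx, state, severity, int(time.time()))


payload = build_state_payload(driver_idx=12, state=1, severity=2)
assert len(payload) == 8  # fits a single classic CAN data frame
```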
In an embodiment, the controller unit 109 may provide at least one of an audio alert and a notification to the driver 103 based on the active state and the severity level associated with the active state. The audio alert may be provided to the driver 103 by activating one of the in-built buzzer, and the voice messaging system. Further, the notification may be provided to the driver 103 on the Human Machine Interface (HMI) of the in-vehicle infotainment system of the commercial vehicle 101.
In an embodiment, the controller unit 109 may transmit at least one of the one or more generated CAN messages 113 and the generated one or more visual contents 115 to an electronic device 121 operated by a fleet owner 125 associated with the commercial vehicle 101. Particularly, the controller unit 109 may transmit the one or more CAN messages 113 to the TCU 117 through the CAN bus interface. Further, the controller unit 109 may transmit the one or more visual contents 115 to the TCU 117 over the first wireless network. Thereafter, the TCU 117 of the commercial vehicle 101 may transmit the at least one of the one or more CAN messages 113 and the one or more visual contents 115 to the cloud server 119 over the second wireless network. Upon receiving them, the cloud server 119 may store the one or more CAN messages 113 and the one or more visual contents 115 for future use. Also, the cloud server 119 may transmit the received one or more CAN messages 113 and the one or more visual contents 115 to the electronic device 121 operated by the fleet owner 125 over the second wireless network. In this manner, the monitoring system 105 may enable the fleet owner 125 to access the real-time information of the active state of the driver 103 associated with the commercial vehicle 101. Based on the received one or more CAN messages 113 and the one or more visual contents 115, the fleet owner 125 may promptly perform one or more actions, thereby preventing hazardous events caused due to the active state of the driver 103. As an example, the fleet owner 125 may initiate a voice communication with the driver 103 over the cellular network to provide instructions for parking the commercial vehicle 101 and taking rest for a predefined duration.
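As a non-limiting illustration, the hop from the controller unit 109 to the TCU 117 over the CAN bus interface could look as follows. The sketch assumes the third-party python-can package and a Linux SocketCAN channel named "can0"; the 29-bit arbitration identifier is an assumed value, not a published assignment.

```python
import can  # third-party "python-can" package


def send_state_frame(payload: bytes, arb_id: int = 0x18FF50E5) -> None:
    """Push one driver-state frame toward the TCU over the CAN bus.
    Assumes a SocketCAN interface named "can0" is up on the host."""
    with can.interface.Bus(channel="can0", interface="socketcan") as bus:
        bus.send(can.Message(arbitration_id=arb_id,
                             data=payload,
                             is_extended_id=True))
```

The heavier visual contents would bypass the bandwidth-limited CAN bus and travel to the TCU over the first wireless network, as described above.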
In an embodiment, the controller unit 109 may detect a fault in at least one of the in-vehicle vision module 107, the controller unit 109 and the interconnecting wiring harness. Based on the detected fault, the controller unit 109 may generate one or more error CAN messages. Further, the controller unit 109 may transmit the one or more generated error CAN messages to the electronic device 121 operated by the fleet owner 125 via the TCU 117 of the commercial vehicle 101 and the cloud server 119. Particularly, the controller unit 109 may transmit the one or more generated error CAN messages to the TCU 117 through the CAN bus interface, and the TCU 117 may transmit the one or more generated error CAN messages to the cloud server 119 over the second wireless network. Thereafter, the cloud server 119 may transmit the error CAN messages, in a decoded format, to the electronic device 121 operated by the fleet owner 125 over the second wireless network. In this manner, the monitoring system 105 may enable the fleet owner 125 to access real-time information related to operational failure of the in-vehicle vision module 107, the controller unit 109 and the interconnecting wiring harness. Accordingly, the fleet owner 125 may initiate a voice communication with the driver 103 of the commercial vehicle 101 to take necessary actions.
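As a non-limiting illustration, the error CAN messages could encode the faulted components as a bitmap. The flag values and payload layout below are assumptions; a production system would more likely follow a standardized diagnostic convention such as J1939 DM1 or UDS.

```python
# Hypothetical fault bit-flags for the three monitored components.
FAULT_VISION_MODULE = 0x01   # in-vehicle vision module unreachable
FAULT_CONTROLLER = 0x02      # controller unit self-test failure
FAULT_WIRING_HARNESS = 0x04  # open/short circuit on the harness


def build_error_payload(fault_flags: int) -> bytes:
    """One-byte fault bitmap padded to a full 8-byte CAN data field."""
    return bytes([fault_flags & 0xFF]) + bytes(7)


# e.g. camera unreadable and harness open-circuit detected together:
payload = build_error_payload(FAULT_VISION_MODULE | FAULT_WIRING_HARNESS)
```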
In an embodiment, based on the active state and the severity level associated with the active state, the controller unit 109 may communicate with the EMS and a Body Control Module (BCM) (not shown in figure) over the CAN bus to perform one or more controlling operations. As an example, the controller unit 109 may communicate with the EMS and the BCM over the CAN bus to perform engine derating and vehicle immobilization. Also, the controller unit 109 may communicate with the EMS and the BCM over the CAN bus for activating a plurality of hazard lamps provided on an exterior part of the commercial vehicle 101. The activation of the plurality of hazard lamps may provide a visual indication to one or more drivers of nearby vehicles on the road to keep a safe distance from the commercial vehicle 101 to prevent hazardous events.
Fig.2 shows a sequence diagram for remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
When the driver 103 occupies a driver seat placed inside the commercial vehicle 101 and initiates an ignition of an engine of the commercial vehicle 101, the in-vehicle vision module 107 may initiate capturing the one or more images 111 of the driver 103. At 201, the controller unit 109 of the monitoring system 105 may receive inputs from the ECU of the vehicle over the CAN bus interface. The controller unit 109 of the monitoring system 105 may control the in-vehicle vision module 107 to initiate capturing the one or more images 111 of the driver 103. Further, the controller unit 109 may receive the captured one or more images 111 of the driver 103 from the in-vehicle vision module 107, and determine the movement of eyes’ pupils and the head movement of the driver 103 in the received one or more images 111. If the active state is determined to be one of the drowsiness state and the in-attentiveness state, the controller unit 109 may determine a severity level associated with the drowsiness state and the in-attentiveness state by tracking the movement of eyes’ pupils and the head movement in the one or more received images. Further, the controller unit 109 may generate the one or more CAN messages 113 indicating at least one of the drowsiness state and the in-attentiveness state. Also, the controller unit 109 may initiate generating the one or more visual contents 115 indicating at least one of the drowsiness state and the in-attentiveness state of the driver 103. The controller unit 109 may store the generated one or more visual contents for transmission to the electronic device 121 of the fleet owner 125.
At 202, the controller unit 109 may immediately transmit the one or more generated CAN messages 113 to the TCU 117 over the CAN bus upon determining at least one of the drowsiness state and the in-attentiveness state of the driver 103. At 203, the TCU 117 may immediately transmit the one or more received CAN messages 113 to the cloud server 119 over the cellular network. The cloud server 119 may transmit the CAN messages to the fleet owner 125 over the cellular network. At 204, the controller unit 109 may transmit the one or more visual contents to the TCU 117 over the short-range wireless communication network. At 205, the TCU 117 may transmit the one or more visual contents to the cloud server 119 over the cellular network. Further, the cloud server 119 may transmit the one or more visual contents to the fleet owner 125.
Fig.3 shows a block diagram of a monitoring system for remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
In an embodiment, the monitoring system 105 may include an in-vehicle vision module 107, and a controller unit 109. In some implementations, the in-vehicle vision module 107 may comprise an image sensor 301 and a plurality of IR LEDs 303. The image sensor 301 may be configured to capture the one or more images 111 of the driver 103. The plurality of IR LEDs 303 may be configured to illuminate the face of the driver 103 during night time for capturing the one or more images 111 of the driver 103. As an example, the image sensor 301 may capture one or more bright pupil images and one or more dark pupil images in sequence, such that the captured bright and dark pupil images are effectively the same image taken under distinct illumination conditions. According to standards defined by the Society of Automotive Engineers (SAE), two IR LEDs 303 may be provided to efficiently capture the eyes’ pupils of drivers of different heights. A Field of View (FOV) of the image sensor 301 and the plurality of IR LEDs 303 in a cabin of the commercial vehicle 101 may be assessed and calibrated with respect to the mounting of the image sensor 301 and the plurality of IR LEDs 303, a position of a driver seat, and various drivers as per SAE standards. Further, the in-vehicle vision module 107 may be configured to capture a plurality of facial features and a plurality of head parameters at a predefined quality and a predefined speed in the one or more images 111. The in-vehicle vision module 107 may transmit the one or more images 111 including the plurality of facial features and the plurality of head parameters to the controller unit 109 for further processing.
In some embodiments, the controller unit 109 may include an I/O interface 305, a microcontroller 307, a memory 309, and modules. The I/O interface 305 may be communicatively coupled to the in-vehicle vision module 107, the TCU 117, the EMS/ECU, the in-vehicle infotainment system, the voice messaging system and the buzzer. The I/O interface 305 may be configured to receive the one or more images 111 of the driver 103 and one or more visual contents 115 associated with the driver 103 from the in-vehicle vision module 107. The I/O interface 305 may be configured to receive one or more CAN inputs from the TCU 117 and the EMS/ECU. Further, the I/O interface 305 may be configured to send control signals generated by the controller unit 109 to the in-vehicle infotainment system, the voice messaging system and the buzzer over the CAN bus.
In an embodiment, the microcontroller 307 may receive the one or more images 111 of the driver 103 driving the commercial vehicle 101 from the in-vehicle vision module 107 through the I/O interface 305. The microcontroller 307 may determine an active state of the driver 103 based on the received one or more images 111. Based on the determined active state, the microcontroller 307 may perform at least one of generating one or more CAN messages 113 associated with the active state, and generating one or more visual contents 115 associated with the driver 103 based on the images received from the in-vehicle vision module 107 through the I/O interface 305. Further, the microcontroller 307 may transmit at least one of the one or more CAN messages 113 and the one or more visual contents 115 to the electronic device 121 operated by the fleet owner 125.
In the monitoring system 105, the memory 309 may store data received through the I/O interface 305, data processed by the microcontroller 307, and the modules. In one embodiment, the data may include image data, head movement data, eyes’ pupils data, active state data, CAN message data, visual content data and other data. The image data may store the one or more images 111 of the driver 103 received through the I/O interface 305. The head movement data may store positions of the driver’s head at different instants of time, and tilt angle data associated with the positions of the head. The eyes’ pupils data may store pupil positions of the driver’s eyes at different instants of time, and eye gaze direction data associated with the pupil positions. The active state data may store the drowsiness state and the in-attentiveness state of the driver 103 and the severity levels associated with the drowsiness state and the in-attentiveness state. The CAN message data may store the CAN messages 113 associated with the active state, and the error CAN messages associated with the fault detection in at least one of the in-vehicle vision module 107, the controller unit 109 and the interconnecting wiring harness. Further, the other data may store data including temporary data generated by the microcontroller 307 and the modules for performing the various functions of the monitoring system 105.
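As a non-limiting illustration, the per-frame records held in the memory 309 could be shaped as below. The field names and encodings are assumptions for illustration, not a disclosed storage format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class DriverStateRecord:
    """One sample of the stored data categories described above."""
    timestamp: float                   # time stamp information
    head_tilt_deg: float               # head movement data
    pupil_xy: Tuple[float, float]      # eyes' pupils data
    gaze_deviation_deg: float          # eye gaze direction data
    active_state: int                  # 0 normal, 1 drowsy, 2 inattentive
    severity: int                      # 0 low, 1 medium, 2 high


@dataclass
class MonitoringLog:
    """Rolling log of records kept for CAN/visual-content generation."""
    records: List[DriverStateRecord] = field(default_factory=list)
```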
In some embodiments, the data stored in the memory 309 may be processed by the modules of the monitoring system 105. In an example, the modules may be communicatively coupled to the microcontroller 307 configured in the monitoring system 105. The modules may be present outside the memory 309 and implemented as separate hardware. As used herein, the term module may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
In some embodiments, the modules may include, for example, a receiver module 311, a determination module 313, a generation module 315, a transceiver module 317, and other modules 319. The other modules 319 may be used to perform various miscellaneous functionalities of the monitoring system 105. It will be appreciated that the aforementioned modules may be represented as a single module or a combination of different modules. Furthermore, a person of ordinary skill in the art will appreciate that, in an implementation, the one or more modules may be stored in the memory 309, without limiting the scope of the disclosure. The said modules, when configured with the functionality defined in the present disclosure, will result in novel hardware.
In an embodiment, the receiver module 311 may receive the one or more images 111 of the driver 103 driving the commercial vehicle 101 from the in-vehicle vision module 107 through the I/O interface 305 for remote monitoring of the driver 103 of the commercial vehicle 101. The received one or more images 111 may be one of grayscale images, RGB images and binary images. The receiver module 311 may send the one or more images 111 to the determination module 313 and the generation module 315 for further processing.
In an embodiment, the determination module 313 may determine the active state of the driver 103 based on the received one or more images 111. The determination module 313 may extract facial features and head parameters in the received one or more images 111. Based on the extracted facial features and head parameters, the determination module 313 may determine movement of eyes’ pupils and head movement of the driver 103 in the received one or more images 111. Further, the determination module 313 may determine the active state as one of a drowsiness state and an in-attentiveness state based on the movement of eyes’ pupils and the head movement. In an embodiment, the controller unit 109 may determine a severity level associated with the active state based on the movement of eyes’ pupils and head movement. Particularly, the controller unit 109 may utilize one or more predefined ML models for determining the active state of the driver 103. As an example, upon receiving the one or more bright and dark pupil images, the determination module 313 may determine the movement of eyes’ pupils by subtracting the bright and dark pupil images from each other, and thresholding the difference obtained from the subtraction to generate binary images. From the binary images, the determination module 313 may detect the eye gaze direction by determining an angle of deviation between a visual axis and an optic axis. Based on the angle of deviation, the determination module 313 may determine whether the driver 103 is distracted or not, and the degree of distraction. Similarly, from the received one or more images 111, the determination module 313 may determine the positions of the driver’s head at different instants of time by tracking a change in the tilt angle of the driver’s head, and may determine that the driver 103 is drowsy. Further, the determination module 313 may determine a time period for which the driver’s eyes are closed to determine the degree of drowsiness as one of high, medium and low.
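As a non-limiting illustration, the bright/dark pupil differencing step could be sketched with OpenCV as below, assuming both frames are already grayscale and registered; the threshold value is illustrative.

```python
import cv2
import numpy as np


def pupil_mask(bright: np.ndarray, dark: np.ndarray,
               thresh: int = 40) -> np.ndarray:
    """Subtract the dark-pupil frame from the bright-pupil frame and
    threshold the absolute difference; the retro-reflective pupils
    survive as white blobs in the resulting binary image."""
    diff = cv2.absdiff(bright, dark)
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return binary
```

Blob positions extracted from this mask over consecutive frames would then feed the pupil-movement and gaze-deviation estimates described above.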
In an embodiment, based on the determined active state, the generation module 315 may generate one or more CAN messages 113 indicating the active state of the driver 103. Each of the one or more CAN messages 113 may comprise a plurality of information associated with the driver 103 of the commercial vehicle 101. The plurality of information associated with the driver 103 may comprise identification information of the driver 103, the active state of the driver 103, the severity level associated with the active state, current location information associated with the commercial vehicle 101, and time stamp information associated with the active state. Further, the generation module 315 may generate the one or more visual contents 115 associated with the driver 103.
In an embodiment, the transceiver module 317 may transmit at least one of the one or more generated CAN messages 113 and the generated one or more visual contents 115 to an electronic device 121 operated by the fleet owner 125 associated with the commercial vehicle 101 through the TCU 117 and the cloud server 119. In particular, the transceiver module 317 may transmit the one or more CAN messages 113 to the TCU 117 over the CAN bus. Further, the transceiver module 317 may transmit the one or more visual contents 115 to the TCU 117 over a first wireless network. The TCU 117 may transmit the at least one of the one or more CAN messages 113 and the one or more visual contents 115 to the electronic device 121 operated by the fleet owner 125 through the cloud server 119 over a second wireless network.
In an embodiment, the other modules 319 may comprise a notification module. The notification module may receive the active state of the driver 103 and the severity level associated with the active state from the determination module 313. Thereafter, the notification module may provide at least one of an audio alert and a notification to the driver 103 based on the active state and the severity level. Particularly, the notification module may send an activation signal to one of the in-built buzzer and the voice messaging system for activating one of the in-built buzzer and the speaker of the voice messaging system. Thereafter, the audio alert may be provided from one of the in-built buzzer and the speaker for warning the driver 103. Alternatively, the notification module may send an activation signal to the in-vehicle infotainment system for displaying the notification for warning the driver 103. Also, the notification module may send an activation signal to the haptic system for actuating the plurality of haptic sensors. Upon actuation, the plurality of haptic sensors may generate a sense of touch by applying a combination of force, vibration and motion sensations to the driver 103 for warning the driver 103.
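As a non-limiting illustration, the notification module’s escalation could mirror the example scenario described later (high severity activating the buzzer, medium the voice messaging system, low the HMI notification). The actuator objects and their methods here are hypothetical stand-ins for the in-vehicle devices.

```python
def dispatch_alert(severity: str, buzzer, voice, hmi) -> None:
    """Route the warning to one alert channel based on severity.
    `buzzer`, `voice` and `hmi` are assumed driver-side actuators."""
    if severity == "high":
        buzzer.activate(duration_s=180)           # intense audible alert
    elif severity == "medium":
        voice.play("take_a_break.wav")            # pre-recorded message
    else:
        hmi.show_notification("Drowsiness detected - please stay alert")
```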
In an embodiment, the other modules 319 may comprise a fault detection module. The fault detection module may detect a fault in at least one of the in-vehicle vision module 107, the controller unit 109 and the interconnecting wiring harness. If a fault is detected, the fault detection module may generate one or more error CAN messages based on the detected fault. The fault detection module may then transmit the one or more generated error CAN messages, indicating the fault in at least one of the in-vehicle vision module 107, the controller unit 109 and the interconnecting wiring harness, to the TCU 117 over the CAN bus. Further, the TCU 117 may transmit the error CAN messages to the electronic device 121 operated by the fleet owner 125 via the cloud server 119 over the cellular network.
Fig.4a shows an exemplary scenario illustrating remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
As illustrated in Fig.4a, a driver 103 may occupy a driver seat 403 inside a cabin of a truck. Thereafter, the driver 103 of the truck may initiate an ignition of an engine of the truck at 5.30 AM and initiate driving using a steering wheel 401. Upon ignition of the engine, an EMS/ECU associated with the truck may send CAN inputs such as vehicle speed, the TCU associated with the truck may send CAN messages containing Real Time Clock (RTC) information, and other electrical inputs such as battery positive, battery negative (vehicle ground) and an ignition signal may be provided to the monitoring system 105 associated with the truck. Upon receiving the CAN inputs, the controller unit 109 may activate the in-vehicle vision module 107 for capturing one or more images 111 of the driver 103 driving the truck. Further, the controller unit 109 may receive the images from the in-vehicle vision module 107. Initially, the driver 103 of the truck may be active and attentive. Accordingly, neither the drowsiness state nor the in-attentiveness state may be detected from the received one or more images 111.
After driving for 4 hours, the driver 103 may feel drowsy. As a result, the driver 103 may keep nodding off while driving the truck. The in-vehicle vision module 107 may capture one or more images 111 of the driver 103 while nodding off, and may send them to the controller unit 109 for further processing. Upon receiving the one or more images 111, the controller unit 109 may determine shrinking of the eyes’ pupils and nodding off in the sequence of images, and may determine the drowsiness state of the driver 103 at 9.30 AM. Also, the controller unit 109 may determine that a time period for which the driver’s eyes are closed exceeds a first predefined time period, and may determine a high severity level associated with the drowsiness state. Accordingly, the controller unit 109 may generate the CAN message indicating a high level of drowsiness of the driver 103 along with time stamp information as “9.30 AM on 24th APRIL 2021” and location information as “Aundh”. Also, the controller unit 109 may receive video frames/data indicating real-time nodding off of the driver 103 from the in-vehicle vision module 107. The controller unit 109 may transmit the CAN messages 113 to the TCU 117 over the CAN bus, and the video frames/data to the TCU 117 over a Wi-Fi network. Concurrently, the controller unit 109 may activate the in-built buzzer 405 to produce an intense sound alert for 3 minutes. Upon hearing the buzzer sound, the driver 103 may be immediately awake.
As an example, at 10 AM, the controller unit 109 may determine a medium level of the drowsiness state by determining that the time period for which the driver’s eyes are closed exceeds a second predefined time period and falls below the first predefined time period. Accordingly, the controller unit 109 may generate the CAN message indicating a medium level of drowsiness of the driver 103 along with time stamp information as “10 AM on 24th APRIL 2021” and location information as “Pimpri”. Also, the controller unit 109 may receive video frames/data indicating a real-time medium level of drowsiness of the driver 103. The controller unit 109 may transmit the CAN messages 113 to the TCU 117 over the CAN bus, and the video frames/data to the TCU 117 over the Wi-Fi network. Concurrently, the controller unit 109 may activate the speaker of the voice messaging system 407 to provide voice messages or a pre-recorded warning audio signal to the driver 103. The voice messages may instruct the driver 103 to take some refreshments. Based on the instruction in the voice messages, the driver 103 may act accordingly.
As an example, at 12.32 PM, the controller unit 109 may determine a low level of the drowsiness state by determining that the time period for which the driver’s eyes are closed falls below the second predefined time period. Accordingly, the controller unit 109 may generate the CAN message indicating a low level of drowsiness of the driver 103 along with time stamp information as “12.32 PM on 24th APRIL 2021” and location information as “Kothrud”. Also, the controller unit 109 may receive video frames/data indicating a real-time low level of drowsiness of the driver 103. The controller unit 109 may transmit the CAN messages 113 to the TCU 117 over the CAN bus, and the video frames/data to the TCU 117 over the Wi-Fi network. Concurrently, the controller unit 109 may activate the infotainment system 409 to provide a visual notification on the HMI screen to warn the driver 103.
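As a non-limiting illustration, the three-level grading used across this scenario can be expressed directly in terms of the two predefined time periods. The numeric limits below are assumed values for illustration; the disclosure does not fix them.

```python
def drowsiness_severity(eye_closure_s: float,
                        first_limit_s: float = 2.0,
                        second_limit_s: float = 1.0) -> str:
    """Grade drowsiness from continuous eye-closure time: above the
    first predefined period -> high, between the two periods -> medium,
    below the second predefined period -> low."""
    if eye_closure_s > first_limit_s:
        return "high"
    if eye_closure_s > second_limit_s:
        return "medium"
    return "low"
```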
In the example, the TCU 117 of the truck may transmit the CAN messages 113 and the associated video frames/data to the cloud server 119 over the cellular network for storing and transmitting to a smartphone 121 of the fleet owner 125. The CAN messages 113 and the associated video frames/data may be stored in the cloud server 119 for future use by insurance agencies and government agencies. As an example, the cloud server 119 may determine that the driver 103 was awake during an accident event. This information may be utilized by insurance companies to reimburse the financial loss to the fleet owner 125 or the driver 103. The information may also be utilized by government agencies to decide that the driver 103 is not responsible for the accident.
Further, in a scenario, the commercial vehicle 101 of the fleet owner 125 may be scheduled to be driven to a remote location for delivery of a consignment. The commercial vehicle 101 may have to stay overnight at the remote location. In such a scenario, a person not authenticated to drive the commercial vehicle 101 may try to take away the commercial vehicle 101 without the knowledge of the authenticated driver 103 or the fleet owner 125. Upon receiving the one or more images 111 of the person from the in-vehicle vision module 107, the monitoring system 105 may generate at least one of a CAN message 113 and a visual content 115 for driver authentication, and transmit it to the TCU 117 over the CAN bus. Further, the TCU 117 may transmit the at least one of the CAN message 113 and the visual content 115 to the smartphone 121 of the fleet owner 125 through the cloud server 119 over the cellular network. Thereafter, the at least one of the CAN message 113 and the visual content 115 may be displayed on a web-portal or application window on the display screen 123 of the smartphone 121. Based on the at least one of the displayed CAN message 113 and visual content 115, the fleet owner 125 may decide to authorize the person for driving the commercial vehicle 101 or send a request for remote immobilization of the commercial vehicle 101 to the monitoring system 105 through the cloud server 119 and the TCU 117. Upon receiving the immobilization request, the monitoring system 105 may send a signal to the EMS/ECU or other ECUs in the commercial vehicle 101 to activate a limp mode or cut off fuel. In this way, theft of the commercial vehicle 101 can be avoided.
Fig.4b-4d show exemplary illustrations of displaying a plurality of webpage screens on an electronic device of a fleet owner in accordance with some embodiments of the present disclosure.
In the example, a first webpage screen 411 may be displayed on the smartphone 121 of the fleet owner 125 upon receiving the CAN messages 113 from the cloud server 119. The first webpage screen 411 may be displayed on a “Fleet Edge” app installed in the smartphone 121. As shown in Fig.4b, the first webpage screen 411 may include a registration number associated with the driver 103, a name of the registered driver 103, the active state of the registered driver 103 along with the severity level, location information associated with the commercial vehicle 101, and timestamp information associated with the active state.
As an example, on 24th APRIL 2021, at 9.30 AM, “High Drowsiness” may be detected for a driver 103 named “Ravi Kumar” having registration no. “MH12VR2012” at the “Aundh” location. Further, on 24th APRIL 2021, at 10 AM, “Mild Drowsiness” may be detected for the driver 103 at the “Pimpri” location, and at 12.32 PM, “Low Drowsiness” may be detected for the driver 103 at the “Kothrud” location. In this manner, the fleet owner 125 may easily track a real-time active state of the driver 103 on the first webpage screen 411, and may perform appropriate actions to wake up or warn the driver 103, thereby preventing hazardous events. Additionally, the fleet owner 125 may inform a traffic control room or a medical emergency room by tracking the active state of the driver 103 of the commercial vehicle 101.
In the example, the fleet owner 125 may be navigated to a second webpage screen 413 in the “Fleet Edge” app on the smartphone 121, to request a live video of the driver 103. Accordingly, a request signal may be transmitted from the smartphone 121 to the TCU 117 of the commercial vehicle 101 through the cloud server 119 over the cellular network. In response to the request signal, the smartphone 121 may receive the requested live video of the driver 103 from the TCU 117 through the cloud server 119 over the cellular network. The received live video of the driver 103 may be displayed on the second webpage screen 413, as illustrated in Fig.4c. In the example, in case of a connectivity issue, the fleet owner 125 may be navigated to a third webpage screen 415 in the “Fleet Edge” app on the smartphone 121, as illustrated in Fig.4d. The third webpage screen 415 may display a detailed alert message. As an example, the third webpage screen 415 may inform the fleet owner 125 that the live video is not available due to poor network strength.
Fig. 5 shows a flow chart illustrating a method of remote monitoring of a driver of a commercial vehicle in accordance with some embodiments of the present disclosure.
As shown in Fig. 5, the method 500 includes one or more blocks illustrating a method of remote monitoring of a driver 103 of a commercial vehicle 101. The order in which the method 500 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
At block 501, the method may include receiving, by a monitoring system 105, one or more images 111 of the driver 103 driving the commercial vehicle 101 from an in-vehicle vision module 107. Particularly, upon ignition of an engine of the commercial vehicle 101, an image sensor 301 of the in-vehicle vision module 107 may start capturing one or more images 111 of the driver 103. Further, the monitoring system 105 may receive the captured images from the in-vehicle vision module 107 for further processing.
At block 503, the method may include determining, by the monitoring system 105, an active state of the driver 103 based on the received one or more images 111. To determine the active state of the driver 103, the monitoring system 105 may determine movement of eyes’ pupils and head movement of the driver 103 in the received one or more images 111. Further, the monitoring system 105 may determine the active state as one of a drowsiness state, and an in-attentiveness state based on the determined movement of eyes’ pupils and head movement. Here, the monitoring system 105 may utilize one or more predefined machine learning models to determine the active state of the driver 103. Also, the monitoring system 105 may determine a severity level associated with the active state based on the determined movement of eyes’ pupils and head movement.
At block 505, the method may include performing, by the monitoring system 105, at least one of: generating, based on the determined active state, one or more Controller Area Network (CAN) messages 113 associated with the active state, and generating, based on the determined active state, one or more visual contents 115 associated with the driver 103 from the one or more images 111 captured by the in-vehicle vision module 107. Based on the determined active state, the monitoring system 105 may generate the one or more CAN messages 113 comprising a plurality of information associated with the driver 103 of the commercial vehicle 101, which may comprise identification information of the driver 103, the active state of the driver 103, the severity level associated with the active state, current location information associated with the commercial vehicle 101, and time stamp information associated with the active state. Additionally, the monitoring system 105 may provide at least one of an audio alert and a notification to the driver 103 based on the active state and the severity level associated with the active state, wherein the notification is provided on a Human Machine Interface (HMI) associated with the monitoring system 105.
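A hedged sketch of the CAN-message generation in block 505 follows, using the python-can library; the arbitration IDs, byte layout, and numeric codes are illustrative assumptions. The payload is split across two frames because a classic CAN frame carries at most 8 data bytes, which is one reason the disclosure speaks of “one or more” CAN messages 113.

```python
import struct
import time

import can  # python-can

# Numeric codes below are assumptions for illustration only.
STATE_CODE = {"drowsiness": 1, "in-attentiveness": 2}
SEVERITY_CODE = {"Low": 1, "Mild": 2, "High": 3}

def build_can_messages(driver_id, state, severity, lat, lon):
    """Pack driver information into CAN frames (CAN messages 113, block 505)."""
    ts = int(time.time())  # time stamp information for the active state
    msg_state = can.Message(
        arbitration_id=0x200,  # assumed ID: driver id, state, severity, time
        data=struct.pack(">HBBI", driver_id, STATE_CODE[state],
                         SEVERITY_CODE[severity], ts),
        is_extended_id=False,
    )
    msg_location = can.Message(
        arbitration_id=0x201,  # assumed ID: current vehicle location
        data=struct.pack(">ff", lat, lon),
        is_extended_id=False,
    )
    return [msg_state, msg_location]
```

In practice such frame layouts would be fixed in a signal database shared with the TCU 117 so that both ends decode the bytes identically.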
At block 507, the method may include transmitting, by the monitoring system 105, at least one of the one or more generated CAN messages 113 and the generated one or more visual contents 115 to an electronic device 121 operated by a fleet owner 125 associated with the commercial vehicle 101. Particularly, the monitoring system 105 may transmit the one or more CAN messages 113 to a Telematics Control Unit (TCU) 117 communicatively coupled with the monitoring system 105 over a CAN bus. The monitoring system 105 may also transmit the one or more visual contents 115 to the TCU 117 over a first wireless network. Further, the TCU 117 may transmit the at least one of the one or more CAN messages 113 and the one or more visual contents 115 to the electronic device 121 operated by the fleet owner 125 via a cloud server 119 over a second wireless network. In another embodiment, the monitoring system 105 may detect a fault in at least one of the in-vehicle vision module 107, the controller unit 109, and an interconnecting wiring harness. Further, the monitoring system 105 may generate one or more error CAN messages based on the detected fault. Thereafter, the monitoring system 105 may transmit the one or more generated error CAN messages to the electronic device 121 operated by the fleet owner 125 via the TCU 117 of the commercial vehicle 101 and the cloud server 119.
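Continuing the python-can sketch, block 507 and the error-reporting path could look as follows; the `can0` channel, the error arbitration ID, and the fault codes are assumptions, and the TCU 117 is assumed to forward frames it receives on the bus to the cloud server 119:

```python
import can  # python-can

ERROR_ID = 0x7FF  # assumed arbitration ID for error CAN messages
FAULT_CODE = {"vision_module": 1, "controller_unit": 2, "wiring_harness": 3}

def transmit(messages, channel="can0"):
    """Put frames on the CAN bus; the TCU 117 relays them onward (block 507)."""
    with can.Bus(channel=channel, interface="socketcan") as bus:
        for msg in messages:
            bus.send(msg)

def report_fault(component, channel="can0"):
    """Generate and send an error CAN message for a detected fault."""
    err = can.Message(arbitration_id=ERROR_ID,
                      data=[FAULT_CODE[component]],
                      is_extended_id=False)
    transmit([err], channel)
```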
Advantages of the embodiments of the present disclosure are illustrated herein.
In an embodiment, the present disclosure provides a method and a monitoring system for remote monitoring of a driver of a commercial vehicle.
In an embodiment, the present disclosure provides a monitoring system which generates CAN messages indicating a real-time active state of the driver and a severity level associated with the active state, for transmission to a fleet owner of the vehicle through a Telematics Control Unit (TCU) and a cloud server. Transmission of the CAN messages at different instants of time enables the fleet owner to easily track the real-time active state of the driver of the commercial vehicle and perform one or more necessary actions to prevent hazardous events.
In an embodiment, the present disclosure provides a monitoring system which provides in-vehicle audio alerts or visual alerts to the driver based on the active state of the driver and the severity level associated with the active state. In other words, the monitoring system activates one or more of an in-built buzzer, a voice messaging system, an in-vehicle infotainment system, and a haptic system associated with the commercial vehicle to provide audio alerts or visual alerts to the driver in a timely manner, thereby avoiding occurrence of the hazardous events and subsequent damages.
The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "some embodiments", and "one embodiment" mean "one or more (but not all) embodiments of the invention(s)" unless expressly specified otherwise.
The terms "including", "comprising", “having” and variations thereof mean "including but not limited to", unless expressly specified otherwise. The enumerated listing of items does not imply that any or all the items are mutually exclusive, unless expressly specified otherwise.
The terms "a", "an" and "the" mean "one or more", unless expressly specified otherwise.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
When a single device or article is described herein, it will be clear that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be clear that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or articles. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Referral Numerals:
| Reference Number | Description |
|---|---|
| 100 | Architecture |
| 101 | Commercial vehicle |
| 103 | Driver |
| 105 | Monitoring system |
| 107 | Vision module |
| 109 | Controller unit |
| 111 | Images |
| 113 | Controller Area Network (CAN) messages |
| 115 | Visual contents |
| 117 | Telematics Control Unit (TCU) |
| 119 | Cloud server |
| 121 | Electronic device |
| 123 | Display screen |
| 125 | Fleet owner |
| 301 | Image sensor |
| 303 | Infrared (IR) Light Emitting Diodes (LED) |
| 305 | I/O interface |
| 307 | Microcontroller |
| 309 | Memory |
| 311 | Receiver module |
| 313 | Determination module |
| 315 | Generation module |
| 317 | Transceiver module |
| 319 | Other modules |
| 401 | Steering wheel |
| 403 | Driver seat |
| 405 | In-built buzzer |
| 407 | Voice messaging system |
| 409 | Infotainment system |
| 411 | First webpage screen |
| 413 | Second webpage screen |
| 415 | Third webpage screen |