
A System For Detecting Objects In Front Of A Moving Body

Abstract: A system 100 and method to detect objects 102 in front of a moving body 104 include an image sensor 106 and a server 108 communicatively coupled with the image sensor 106. The server 108 includes one or more processing modules using deep learning models, where at least one processor is configured to receive one or more images captured by the image sensor 106, perform pre-processing of the received images, and find one or more regions of interest in which one or more objects 102 are detected and classified for feature extraction. The system 100 displays the images on a display through an image rendering module 118 configured for a driver 128 in a cabin 126, and generates light and sound warnings if an object 102 is identified as wild or dangerous and likely to have a larger impact in case of collision.


Patent Information

Application #
Filing Date
09 May 2023
Publication Number
46/2024
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application

Applicants

Bharat Electronics Limited
Corporate Office, Outer Ring Road, Nagavara, Bangalore - 560045, Karnataka, India.

Inventors

1. GULLIPALLI, Rajesh Kumar
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.
2. MITTAL, Virendrakumar
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.
3. JETTIPALLI, Jyotheswar
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.

Specification

Description:
TECHNICAL FIELD
[0001] The present disclosure relates generally to the technical field of target detection. In particular, the disclosure relates to a system and a method for detecting animals in front of a moving train to avoid collisions during day and night movement.

BACKGROUND
[0002] Background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Numerous accidents occur each year globally in which animals collide with moving vehicles on roads, on railway tracks, or in safaris while crossing the path of the moving body. This endangers the lives of the animals as well as those of the drivers and passengers of the moving body, especially when the speed of either party or both is high. These incidents may occur during the day as well as at night, with night-time collisions being more prominent as wild animals search for food.
[0004] To control or avoid such collisions, many measures have been devised in the past, for example changing the path or erecting barbed-wire fencing, but in the age of technology such methods are inadequate. Hence, various disclosures have proposed innovative systems that detect animals and warn the driver with audio and video warnings, as well as warn the animals approaching the moving object.
[0005] Patent Document US20140074359A1 discloses a system and method for animal crash avoidance for a vehicle that includes an infrared camera and other sensors; using these, the safety system may detect an object in an area around the vehicle. A controller analyses the data from the sensors and cameras and determines if the object is an animal. If the object is determined to be an animal, the controller initiates a response to avoid or minimize the chance of impacting the animal.
[0006] Another Patent Document US20170161569A1 discloses a system and method of object detection in which image data associated with an image of a scene is processed. The scene includes a road region. The method further includes detecting the road region based on the image data and determining a subset of the image data. The subset excludes at least a portion of the image data corresponding to the road region. The method further includes performing an object detection operation on the subset of the image data to detect an object.
[0007] Yet another Patent Document JP20099204570A discloses a navigation system responsive to wild animals to prevent a vehicle from coming into contact with them. It includes a centre server that holds wild-animal distribution data in a database and performs a distribution service, and an on-board navigation device that guides the vehicle by receiving the wild-animal distribution from the centre server. When the vehicle travels in an area where a wild animal may jump out, a warning signal that warns the wild animal is generated to prevent contact.
[0008] While the cited references disclose different types of methods, each suffers from one or more drawbacks, such as computational complexity, that restrict its purpose, and there remains scope for a more efficient solution to the above-mentioned problems of detecting animals and avoiding collisions.
[0009] There is, therefore, a need to provide a simple, improved, and cost-effective system and method that eliminates the above-mentioned limitations and problems, and which is easy to execute.

OBJECTS OF THE INVENTION
[0010] A general object of the present disclosure is to provide a simple, improved, and cost-effective system and method that eliminates computational problems and is easy to execute.
[0011] It is an object of the present disclosure to provide detection of animals in front of a moving train to avoid collision.
[0012] Another object of the present disclosure is to provide a real time video of the front view for a moving train during day as well as during night.
[0013] Another object of the present disclosure is to provide audio and visual warning to the loco pilot.
[0014] Another object of the present disclosure is to provide a graphics processing unit using artificial intelligence with deep learning models for providing outputs.

SUMMARY
[0015] The present disclosure relates generally to the technical field of target detection. In particular, the disclosure relates to a system and a method for detecting animals in front of a moving train to avoid collisions during day and night movement.
[0016] An aspect of the present disclosure is a system to detect objects in the field of view of a moving body, including an image sensor and a server communicatively coupled with the image sensor. The server includes one or more processing modules using deep learning models, and at least one database, operatively coupled with memory storing instructions which, when executed, cause at least one processor to: receive one or more images captured by the image sensor in the field of view; perform pre-processing of the received image to find one or more regions of interest; detect and classify one or more objects within the regions of interest for feature extraction; display all regions of interest on a display unit configured for a driver in a cabin of the moving body; and generate a visual and sound warning for the driver of the moving body if the identified object is dangerous and likely to have a larger impact in case of collision.
[0017] In an aspect, the objects under detection are one or more animals likely to be present in front of the moving body and the moving body is a train.
[0018] In an aspect, the image sensor is configured in front of the engine to have greatest possible field of view.
[0019] In an aspect, the image sensor selected is an infrared electro-optic camera sensor to capture images in front of the moving body within the field of view.
[0020] In an aspect, the display unit is an interactive graphical user interface (GUI) display device streaming real-time images captured by the image sensor and the display unit includes a plurality of buttons for modes of operations.
[0021] In an aspect, the plurality of modes of operation includes a record mode, a detect mode, browse mode and a go live mode.
[0022] In an aspect, the one or more modules are an image acquisition module, pre-processing module, detection and classification module, and an image rendering module.
[0023] Another aspect of the disclosure is a method of detecting objects in the field of view of a moving body, including the steps of receiving one or more images captured by the image sensor in the field of view; performing pre-processing of the received image to find one or more regions of interest; detecting and classifying one or more objects within the regions of interest for feature extraction; displaying all regions of interest on a display unit configured for a driver in a cabin of the moving body; and generating a visual and sound warning for the driver of the moving body if the identified object is dangerous and likely to have a larger impact in case of collision.
[0024] In an aspect, the extracted features include the number of animals, types of animals, proximity, and their course of movement.
[0025] In an aspect, deep learning models with artificial intelligence are used for extraction of the features of the objects.
[0026] Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.

BRIEF DESCRIPTION OF THE DRAWINGS
[0027] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0028] FIG. 1 illustrates a system using artificial intelligence to detect objects in front of a moving body, in accordance with embodiments of the present disclosure.
[0029] FIG. 2 illustrates a block diagram for an image rendering module according to FIG. 1, in accordance with embodiment of the present disclosure.
[0030] FIG. 3 illustrates flow chart for the display of video on a screen in record mode of the image rendering module, in accordance with embodiments of the present disclosure.
[0031] FIG. 4 illustrates flow chart for the display of video in detect mode of the image rendering module, in accordance with embodiments of the present disclosure.
[0032] FIG. 5 illustrates flow chart for the display of video on a screen in browse mode of the image rendering module, in accordance with embodiments of the present disclosure.
[0033] FIG. 6 illustrates a method flow diagram, in accordance with embodiments of the present disclosure.

DETAILED DESCRIPTION
[0034] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0035] The present disclosure relates generally to the technical field of target detection. In particular, the disclosure relates to a system and a method for detecting animals in front of a moving train to avoid collisions during day and night movement.
[0036] Embodiments explained herein relate to a system and method to detect animals in front of a moving train to avoid collisions during day and night movement, including an image sensor and a server communicatively coupled with the image sensor. The server includes one or more processing modules using deep learning models, where at least one processor is configured to receive one or more images captured by the image sensor, perform pre-processing of the received images, and find one or more regions of interest in which one or more animals are detected and classified for feature extraction. The system displays video on a display unit configured for a loco pilot in the cabin and generates a visual and sound warning if the animal is identified as dangerous and likely to have a larger impact in case of collision.
[0037] Referring to FIG. 1, a system 100 using artificial intelligence to detect objects 102 in front of a moving body 104 is shown.
[0038] In an aspect, the objects 102 under detection are one or more animals (referred to as animals hereinafter), including wild animals, likely to be present in front of the moving body 104, and the moving body 104 is a moving train (referred to as train hereinafter) running between two stations and passing through a region where wild animals 102 are present. The region is such that animals 102 moving about and crossing the rail lines may collide with the train 104. These incidents may occur during the day as well as at night, with night-time collisions being more prominent as the animals 102 search for food.
[0039] The system 100 for detection of animals 102 includes an image sensor 106 mounted on the engine 126 of the train 104, and a server 108 communicatively coupled with the image sensor 106. The image sensor 106 is configured in front of the engine to have the greatest possible field of view (FOV), shown as the area within dotted lines.
[0040] In an embodiment, the image sensor 106 selected is an infrared electro-optic camera sensor configured with the system 100 to capture real-time images in front of the train 104 within the field of view. When one or more animals are captured by the image sensor 106 and the images are sent to the server 108, the pre-processing module removes noise using a noise reduction circuit 114-1, and the enhanced video is sent to the image rendering module 118, where the driver 128 (referred to as loco pilot hereinafter) can select modes of operation depending upon the requirement.
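The pre-processing described above can be illustrated with a minimal sketch, assuming Python and OpenCV are available; the function name preprocess_frame and the filter parameters are illustrative assumptions, not the noise reduction circuit 114-1 of the specification.

    import cv2

    def preprocess_frame(frame_bgr):
        """Illustrative pre-processing: reduce sensor noise and enhance
        contrast before detection (all parameters are assumptions)."""
        # Non-local-means denoising is one common choice for video frames.
        denoised = cv2.fastNlMeansDenoisingColored(frame_bgr, None, 10, 10, 7, 21)
        # Enhance contrast on the luminance channel only.
        y, cr, cb = cv2.split(cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb))
        y = cv2.equalizeHist(y)
        return cv2.cvtColor(cv2.merge((y, cr, cb)), cv2.COLOR_YCrCb2BGR)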
[0041] In an embodiment, the display 202 of the image rendering module is an interactive graphical user interface (GUI) display unit streaming real-time images captured by the image sensor 106. The display unit is located near the loco pilot 128 seat in the engine 126. The display unit includes a plurality of buttons for modes of operation. The plurality of modes of operation includes a record mode, a detect mode, a browse mode, and a go-live mode.
[0042] In an embodiment, whenever a wild animal 102 is detected in the FOV, an audio warning unit 122 gives an alarm to alert the loco pilot 128 and a visual alert is displayed on the screen, so that the loco pilot 128 can take appropriate action to avoid a probable collision with the wild animal 102. The visual alert displayed on the screen is preferably in red colour, and the audio warning unit 122 and the visual alert on the screen are configured in the cabin and activated simultaneously upon detection of the animals.
[0043] In an embodiment, the images from the image sensor are received by the server 108. The server 108 is a GPU (graphics processing unit) server that includes specialized processors to accelerate the processing of video information. The GPU server can process multiple images simultaneously, making it useful for deep learning, where parallel processing of images is required for detecting animals, classifying them into their categories, and rendering high-definition video. The GPU can also train deep learning neural networks for artificial intelligence applications. The server 108 includes one or more modules; in the disclosure, the server 108 has an image acquisition module 112, a pre-processing module 114, a detection and classification module 116, and an image rendering module 118.
[0044] In an embodiment, classification and processing of image information is a prerequisite for in-depth mining of image information. Therefore, to recognize and classify the image, the image acquisition module 112 obtains the original image information, the noise is then reduced to clean the image data, and features are extracted, including the number of animals 102, types of animals 102, proximity, their likely course of movement, and the boundary image. Finally, a deep learning algorithm is used to realize image classification of the detected animals. The deep-learning-based image classification algorithm performs in real time to alert the loco pilot 128. Considering that the image classification algorithm needs to meet certain real-time performance requirements, the corresponding image classification model is established and optimized.
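As a minimal structural sketch, assuming Python, the four modules could be chained as below; every class and method name here is hypothetical and only illustrates the data flow described above, not the specification's implementation.

    class AnimalDetectionPipeline:
        """Hypothetical wiring of the modules 112-118 described above."""

        def __init__(self, acquisition, preprocessor, detector, renderer):
            self.acquisition = acquisition    # image acquisition module 112
            self.preprocessor = preprocessor  # pre-processing module 114
            self.detector = detector          # detection and classification module 116
            self.renderer = renderer          # image rendering module 118

        def process_next_frame(self):
            frame = self.acquisition.read_frame()       # raw sensor image
            clean = self.preprocessor(frame)            # noise reduced, ROIs prepared
            detections = self.detector.classify(clean)  # [(label, box, score), ...]
            self.renderer.show(clean, detections)       # overlay and warnings
            return detections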
[0045] In an embodiment, deep learning is a branch of machine learning based on artificial neural networks. The deep learning model is a multilayer deep neural network model which performs data interpretation and integrates several layers of features to produce prediction outcomes. The Convolutional Neural Network (CNN) is the most popular neural network model used for image classification problems. The convolution is a weighted sum of the pixel values of the image, computed as a window slides across the whole image, and it produces another image whose size depends on the convolution; the convolution applied throughout the image is defined by a weight (kernel) matrix.
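The "weighted sum over a sliding window" mentioned above can be shown with a short NumPy sketch (illustrative only; CNN libraries implement this operation far more efficiently, and CNN "convolution" layers in practice compute the closely related cross-correlation).

    import numpy as np

    def conv2d_same(image, kernel):
        """Weighted sum of pixel values under a sliding window, giving an
        output of the same size as the zero-padded input."""
        kh, kw = kernel.shape
        ph, pw = kh // 2, kw // 2
        padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
        out = np.zeros_like(image, dtype=float)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                window = padded[i:i + kh, j:j + kw]
                out[i, j] = np.sum(window * kernel)  # weighted sum at this position
        return out

    # Example: a 3x3 edge-emphasising kernel applied to a toy 5x5 image.
    img = np.arange(25, dtype=float).reshape(5, 5)
    kernel = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
    print(conv2d_same(img, kernel))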
[0046] FIG. 2 illustrates a block diagram of the image rendering module 118. The image rendering module 118 includes a display 202. The display 202 is an interactive graphical user interface (GUI) display streaming real-time images captured by the image sensor 106. The display 202 includes a plurality of buttons for modes of operation. In an aspect, the plurality of modes of operation includes a record mode 204, a detect mode 206, a browse mode 208 and a go-live mode 210.
[0047] In an embodiment, the loco pilot 128 can watch the video stream shown in the display 202 window of the image rendering module 118. The video can be recorded by pressing the record button 204 to start the record mode, and pressing the record button again stops the record mode. The video shown on the display 202 is recorded and stored on an external storage device 120 until the loco pilot 128 or another operator clicks the stop button; once stopped, the video on the display 202 is no longer recorded and the recorded video is saved on the external storage 120. The saved video can include superimposed detection boxes depending on whether the detect mode 206 is on.
[0048] In an embodiment, the loco pilot 128 can also select an imported video or image file to locate and recognize any wild animals 102 present in it, and can select the go-live mode 210 to switch from displaying detections from imported videos to a live video stream.
[0049] In an embodiment, the detect mode 206 starts when the detect button is pressed, and pressing the detect button again stops the detect mode. The detect mode 206 is used to overlay bounding boxes on the incoming video stream, which are displayed in the display window. The same detect button 206 is used both to show bounding boxes over all wild animals present in the incoming video feed and to remove those superimposed bounding boxes.
[0050] In an embodiment, the browse mode 208 starts when the browse button 208 is pressed, and pressing the browse button 208 again stops the browse mode. This mode 208 is used to load videos or images; the location and identity of any wild animals 102 are detected, and the video superimposed with its detections is displayed in the display window.
[0051] In an embodiment, the go-live button 210 is used to switch the display window from showing detections from an imported video or image file to the live video stream; the loco pilot 128 can select the go-live button 210 at any time to return to the live video stream.
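A minimal sketch, assuming Python, of how the four toggle buttons could drive the display state; the flag names are assumptions for illustration and not the specification's GUI implementation.

    class RenderingModes:
        """Hypothetical mode state for the image rendering module 118."""

        def __init__(self):
            self.recording = False  # record mode 204 (toggle)
            self.detecting = False  # detect mode 206 (toggle)
            self.browsing = False   # browse mode 208 (toggle)
            self.live = True        # go-live mode 210 returns to the live stream

        def toggle_record(self):
            self.recording = not self.recording

        def toggle_detect(self):
            self.detecting = not self.detecting

        def toggle_browse(self):
            self.browsing = not self.browsing
            self.live = not self.browsing  # browsing an imported file leaves the live feed

        def go_live(self):
            self.browsing = False
            self.live = True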
[0052] FIG. 3 illustrates a flow chart 300 for the display of video on the display 202 in record mode 204 of the image rendering module 118. The image 302 is acquired and, when the loco pilot 128 presses the record button 204 as per step 304, the video shown on the display 202 is recorded and saved in the external storage 120 as per step 306. The saved video can include detections depending on whether the detect button is on. If the loco pilot 128 presses the record button 204 again, the recording stops and the displayed images are no longer recorded, as per step 312; if the loco pilot 128 does not press the record button 204 again, the record mode 204 continues as per step 306.
[0053] In an embodiment, in case the loco pilot 128 does not press the record button 204 at all, then according to step 308 the real-time video stream is shown on the display 202 and the video is not recorded.
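A minimal sketch of the record-mode branch of FIG. 3, assuming Python and OpenCV; the video source, output path, codec and frame rate are assumptions for illustration.

    import cv2

    def run_record_loop(source=0, out_path="recording.avi"):
        """Illustrative record mode: frames shown on screen are also written
        to external storage while recording is enabled (steps 304-312)."""
        cap = cv2.VideoCapture(source)
        fourcc = cv2.VideoWriter_fourcc(*"XVID")
        writer, recording = None, False
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if recording:
                if writer is None:
                    h, w = frame.shape[:2]
                    writer = cv2.VideoWriter(out_path, fourcc, 25.0, (w, h))
                writer.write(frame)           # saved to external storage 120
            cv2.imshow("display", frame)      # shown whether or not recording
            key = cv2.waitKey(1) & 0xFF
            if key == ord("r"):               # stand-in for the record button 204
                recording = not recording
            elif key == ord("q"):
                break
        if writer is not None:
            writer.release()
        cap.release()
        cv2.destroyAllWindows()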
[0054] FIG. 4 illustrates a flow chart 400 for the display of video in detect mode 206 of the image rendering module 118. To turn the detect mode 206 on, the loco pilot 128 selects the detect mode 206 by pressing the detect button as per step 402, and the image 302 is processed in the detection and classification module 116. The detection and classification module 116 captures the presence of wild animals 102 and then classifies them. A positive detection initiates an audio warning as well as a visual warning upon detection of the animals 102, as per step 404.
[0055] In an embodiment, according to step 406, the real-time video on the display 202 is superimposed with the obtained boundary boxes, and the classified animal 102 name is shown on top of each boundary box.
[0056] In an embodiment, if the loco pilot 128 presses the detect button 206 again, the detect mode stops. This state is similar to the case in which the loco pilot 128 has not pressed the detect button 206 at all: according to step 410, the real-time video stream is shown on the display 202 and no superimposed bounding boxes appear.
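A minimal sketch of the overlay of steps 404-406, assuming Python and OpenCV and a hypothetical detector that returns (label, box, score) tuples; the label set, the red box colour for dangerous animals and the warn callback are illustrative assumptions.

    import cv2

    WILD_CLASSES = {"elephant", "boar", "deer"}  # assumed label set, for illustration

    def overlay_detections(frame, detections, warn):
        """Draw bounding boxes with the class name above each box and trigger
        the audio/visual warning for wild animals (steps 404-406)."""
        for label, (x1, y1, x2, y2), score in detections:
            colour = (0, 0, 255) if label in WILD_CLASSES else (0, 255, 0)
            cv2.rectangle(frame, (x1, y1), (x2, y2), colour, 2)
            cv2.putText(frame, "%s %.2f" % (label, score), (x1, max(y1 - 8, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, colour, 2)
            if label in WILD_CLASSES:
                warn(label)  # e.g. sound the audio warning unit 122
        return frame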
[0057] FIG. 5 illustrates a flow chart 500 for the display of video in browse mode 208 of the image rendering module 118. To turn the browse mode 208 on, the loco pilot 128 selects the browse mode 208 by pressing the browse button as per step 502. This initiates the import of a video or an image file to determine the presence of a wild animal 102 in the video or image and obtain its bounding box coordinates, according to step 504. In case the loco pilot 128 does not select the browse mode 208, the running video stream continues, with or without the detect mode 206 and with or without the record mode 204, as per step 506.
[0058] In an embodiment, if the loco pilot 128 selects the go-live mode 210 as per step 508, the display 202 quickly changes from the earlier selected mode, for example the detect mode 206 showing imported video or image files, and starts showing live video as per step 510. If not, then as per step 512, the complete video file with superimposed detections is shown on the display unit, and real-time video streaming begins after the imported video file has finished playing.
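A minimal sketch of the browse-mode branch of FIG. 5, assuming Python and OpenCV; detect_and_overlay stands in for the detection and classification module 116 and is an assumption.

    import cv2

    def browse_file(path, detect_and_overlay):
        """Illustrative browse mode: an imported video file is played back with
        detections superimposed (steps 502-504, 512)."""
        cap = cv2.VideoCapture(path)
        while True:
            ok, frame = cap.read()
            if not ok:                                # end of imported file
                break
            cv2.imshow("display", detect_and_overlay(frame))
            if cv2.waitKey(25) & 0xFF == ord("g"):    # stand-in for the go-live button 210
                break
        cap.release()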
[0059] FIG. 6 illustrates a method flow diagram 600. The method 600 detects objects 102 in the field of view of a moving body 104. As mentioned above, the objects 102 are animals, including wild animals, and the moving body 104 is a train passing through a region where animals, including wild animals, roam freely. They may come onto the rail line, sit, walk, and cross. The train 104 includes an image sensor 106 mounted in front of the engine capturing real-time images in the field of view. The captured image data are passed to an image acquisition module 112, which is part of a GPU server 108. The server 108 is communicatively coupled with the image sensor 106 and includes one or more processing modules for receiving one or more images 302 captured by the image sensor 106 in the field of view, according to step 602.
[0060] In an embodiment, the one or more processors, according to step 604, perform pre-processing of the received image to find one or more regions of interest. Further, as per step 606, one or more objects 102 within the regions of interest are detected and classified for feature extraction. The extracted features include the number of animals 102, types of animals 102, proximity, and their course of movement. Bounding boxes are displayed on the display 202 indicating an identified animal 102 that may be wild or dangerous, distinguishing it from the other animals 102.
[0061] In an embodiment, according to step 608, all regions of interest are displayed on the display 202 configured for the loco pilot 128 in the cabin 126 of the train 104, as per the mode selected by the loco pilot 128. Finally, as per step 610, an audio warning and a visual warning are generated for the loco pilot 128 in case the animals 102 are identified as wild or dangerous and likely to have a larger impact in case of collision.
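Putting steps 602-610 together, a minimal end-to-end sketch in Python; all helper names are assumptions and the deep-learning detector is abstracted behind classify_regions.

    def detect_objects_in_fov(read_frame, preprocess, classify_regions,
                              display, warn, wild_labels):
        """Illustrative flow of steps 602-610: acquire, pre-process, detect and
        classify, display, and warn on dangerous detections."""
        frame = read_frame()                    # step 602: image from sensor 106
        rois = preprocess(frame)                # step 604: regions of interest
        detections = classify_regions(rois)     # step 606: labels, boxes, features
        display(frame, detections)              # step 608: render on display 202
        for label, box, score in detections:    # step 610: warn the loco pilot 128
            if label in wild_labels:
                warn(label)
        return detections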
[0062] Thus, the present disclosure provides a system 100 and method 600 for detection of the animals 102 in front of the train 104 and alerts the loco pilot 128 to decide the further course of action. The system and method can also be implemented for other moving objects such as cars, buses and other commercial vehicles, which are covered under the disclosure.
[0063] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE INVENTION
[0064] A general advantage of the present disclosure is that it provides a simple, improved, and cost-effective system and method that eliminates the above-mentioned problems and is easy to execute.
[0065] The present disclosure provides detection of animals in front of a moving train to avoid collision.
[0066] The present disclosure provides a real-time video of the front view of a moving train during the day as well as at night.
[0067] The present disclosure provides audio and visual warning to the loco pilot.
[0068] The present disclosure provides a graphics processing unit using artificial intelligence with deep learning models for providing outputs.
Claims:
1. A system (100) to detect objects (102) in the field of view of a moving body (104), the system (100) comprising:
an image sensor (106); and
a server (108) communicatively coupled with the image sensor (106) comprising one or more processing modules using deep learning models, and at least one database (110), operatively coupled with memory storing instructions which, when executed, cause at least one processor to:
receive one or more images (302) captured by the image sensor (106) in the field of view;
perform pre-processing of the received image (302) to find one or more region of interest;
detect and classify one or more objects (102) within the regions of interest for feature extraction;
display all regions of interest with detected objects (102) on a display (202) configured for a driver (128) in a cabin (126) of the moving body (104); and
generate visual and sound warning for the driver (128) of the moving body (104) if the identified object (102) is dangerous and likely to have larger impact in case of collision.

2. The system as claimed in claim 1, wherein the objects (102) under detection are one or more animals likely to be present in front of the moving body (104), wherein the moving body (104) is a train.

3. The system as claimed in claim 1, wherein the image sensor (106) is configured in front of the engine to have greatest possible field of view.

4. The system as claimed in claim 1, wherein the image sensor (106) selected is an infrared electro-optic camera sensor to capture images in front of the moving body (104) within the field of view.

5. The system as claimed in claim 1, wherein the display (202) is an interactive graphical user interface (GUI) display device streaming real-time images captured by the image sensor (106), wherein the display (202) comprising a plurality of buttons for modes of operations.

6. The system as claimed in claim 5, wherein the plurality of modes of operation includes a record mode (204), a detect mode (206), browse mode (208) and a go live mode (210).

7. The system as claimed in claim 1, wherein the one or more modules are an image acquisition module (112), pre-processing module (114), detection and classification module (116), and an image rendering module (118).

8. A method (600) of detecting objects (102) in the field of view of a moving body (104), the method (600) comprising the steps of:
receiving one or more images (302) captured by the image sensor (106) in the field of view;
performing pre-processing of the received image (302) to find one or more region of interest;
detecting and classifying one or more objects (102) within the regions of interest for feature extraction;
displaying all regions of interest with detected objects (102) on a display (202) configured for a driver (128) in a cabin (126) of the moving body (104); and
generating visual and sound warning for the driver (128) of the moving body (104) if the identified object (102) is dangerous and likely to have larger impact in case of collision.
9. The method as claimed in claim 8, wherein the extracted features include the number of animals (102), types of animals (102), proximity, and their course of movement.

10. The method as claimed in claim 9, wherein the deep learning models with artificial intelligence are used for extraction of the features of the objects (102).

Documents

Application Documents

# Name Date
1 202341032822-STATEMENT OF UNDERTAKING (FORM 3) [09-05-2023(online)].pdf 2023-05-09
2 202341032822-POWER OF AUTHORITY [09-05-2023(online)].pdf 2023-05-09
3 202341032822-FORM 1 [09-05-2023(online)].pdf 2023-05-09
4 202341032822-DRAWINGS [09-05-2023(online)].pdf 2023-05-09
5 202341032822-DECLARATION OF INVENTORSHIP (FORM 5) [09-05-2023(online)].pdf 2023-05-09
6 202341032822-COMPLETE SPECIFICATION [09-05-2023(online)].pdf 2023-05-09
7 202341032822-ENDORSEMENT BY INVENTORS [09-06-2023(online)].pdf 2023-06-09
8 202341032822-Proof of Right [19-06-2023(online)].pdf 2023-06-19
9 202341032822-POA [04-10-2024(online)].pdf 2024-10-04
10 202341032822-FORM 13 [04-10-2024(online)].pdf 2024-10-04
11 202341032822-AMENDED DOCUMENTS [04-10-2024(online)].pdf 2024-10-04
12 202341032822-Response to office action [01-11-2024(online)].pdf 2024-11-01