
System And Method For Object Detection And Ranging

Abstract: The present disclosure relates to a system and a method for object detection and ranging. The method includes obtaining an image of a target captured by an image sensor, enhancing the captured image to remove blur, estimating a size and a range of the target based on a number of pixels associated with the target, changing an angle of capture associated with the image sensor by sending a feedback signal, based on a confidence score associated with the estimated range, to an elevation controller of a gimbal, confirming the estimated range, detecting a presence of the target based on the estimation, and displaying the detected target.


Patent Information

Application #
202341039289
Filing Date
08 June 2023
Publication Number
50/2024
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application

Applicants

Bharat Electronics Limited
Corporate Office, Outer Ring Road, Nagavara, Bangalore - 560045, Karnataka, India.

Inventors

1. MANDAL, Subhasis
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.
2. MITTAL, Virendrakumar
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.

Specification

Description:
TECHNICAL FIELD
[0001] The present disclosure relates, in general, to surveillance networks, and more specifically, relates to systems and methods for artificial intelligence (AI)-based detection and ranging of target objects.

BACKGROUND
[0002] The background description includes information that may be useful in understanding the present disclosure. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed disclosure, or that any publication specifically or implicitly referenced is prior art.
[0003] In several situations, there is a need to detect an object or a target and estimate a range to it. For example, if the instrument landing system (ILS) of an aircraft becomes disabled, knowing the distance from the aircraft to the runway threshold is highly advantageous in assisting the landing. In automotive travel, it is highly desirable to be aware of the distance between vehicles at all times to avoid collisions. An air-to-air missile can use the estimated range to the target to determine when to detonate its warhead.
[0004] There are various techniques for estimating the object range, some of which use artificial intelligence (AI). In general, AI-based detection and ranging devices require substantial computational power and memory, and produce output with considerable delay.
[0005] Therefore, it is desired to overcome the drawbacks, shortcomings, and limitations associated with existing solutions, and develop a more efficient detection, tracking, and estimation system.

OBJECTS OF THE PRESENT DISCLOSURE
[0006] An object of the present disclosure is to employ an artificial intelligence (AI)-based system for passive detection and ranging of a target object.
[0007] Another object of the present disclosure is to improve accuracy and reduce the memory and processing requirements associated with the AI-based system for detecting and ranging a target object.
[0008] The other objects and advantages of the present disclosure will be apparent from the following description when read in conjunction with the accompanying drawings, which are incorporated for illustration of the preferred embodiments of the present disclosure and are not intended to limit the scope thereof.

SUMMARY
[0009] An aspect of the present disclosure relates to a system for object detection and ranging. The system includes one or more processors and a memory operatively coupled to the one or more processors, wherein the memory comprises processor-executable instructions, which on execution, cause the one or more processors to obtain an image of a target captured by an image sensor, enhance the captured image to remove blur, estimate a size and a range of the target based on a number of pixels associated with the target, detect a presence of the target based on the estimation, and display the detected target. In some embodiments, the one or more processors are configured to change an angle of capture associated with the image sensor by sending a feedback signal, based on a confidence score associated with the estimated range, to an elevation controller of a gimbal, and to confirm the estimated range. The image sensor comprises a day and night camera for capturing the target present in a field of view (FOV) associated with the camera and is placed on the gimbal, wherein the gimbal is mounted on an unmanned aerial vehicle (UAV).
[0010] Another aspect of the present disclosure relates to a method for object detection and ranging. The method includes obtaining an image of a target captured by an image sensor, enhancing the captured image to remove blur, estimating a size and a range of the target based on a number of pixels associated with the target, detecting a presence of the target based on the estimation, and displaying the detected target. In some embodiments, the method further includes rechecking a confidence score associated with the detected range, changing an angle of capture associated with the image sensor by sending a feedback signal to an elevation controller of a gimbal, and confirming the estimated range.
[0011] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.

BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The following drawings form part of the present specification and are included to further illustrate aspects of the present disclosure. The disclosure may be better understood by reference to the drawings in combination with the detailed description of the specific embodiments presented herein.
[0013] FIG. 1 illustrates an exemplary architecture (100) in which or with which an artificial intelligence (AI)-enabled passive ranging and detection is implemented, in accordance with an embodiment of the present disclosure.
[0014] FIG. 2 illustrates an exemplary block diagram (200) of the system (110) for implementing the AI-enabled passive ranging and detection of targets, in accordance with an embodiment of the present disclosure.
[0015] FIG. 3 illustrates an exemplary flow diagram (300) of the AI-enabled passive ranging and detection system, in accordance with an embodiment of the present disclosure.
[0016] FIG. 4 illustrates an exemplary flow chart of a method (400) for performing AI-enabled passive detection and ranging, in accordance with an embodiment of the present disclosure.
[0017] FIG. 5 illustrates an exemplary computer system (500) in which or with which embodiments of the present disclosure may be implemented.
[0018] The foregoing shall be more apparent from the following more detailed description of the invention.

DETAILED DESCRIPTION
[0019] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0020] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[0021] The present disclosure relates to a surveillance network and, more specifically, to systems and methods for passive detection and ranging of targets or objects using Artificial Intelligence (AI). In an embodiment of the present disclosure, the system for object ranging and detection may include a day and night camera. The day and night camera may be capable of capturing an image of the required target or object in bright daylight as well as in poorly lit or dark night-time conditions. The day and night camera forms a sensor assembly and may be deployed on a gimbal. In accordance with an embodiment, the gimbal may be mounted on an unmanned aerial vehicle (UAV). The gimbal is a pivoted support that permits rotation of an element placed on it, such as the day and night camera, about an axis. The gimbal assembly may be used to scan a very wide angular range, for example, 180 degrees, for estimating the range of the object. An elevation controller present in the gimbal may be used to control the scanning angle. Further, an AI engine in the ranging and detection system may be trained to detect and classify different classes of objects such as, without limitation, a person, a building, a vehicle, a railway line, etc., in order to detect and range such objects. In accordance with an embodiment of the present disclosure, the AI engine/model may be trained to estimate the number of pixels covered by an object in a given field of view (FOV) to provide the object range. A training dataset may be created for this purpose and stored in a database.
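The disclosure leaves the elevation controller's scan logic unspecified; as a minimal sketch, the stepping behaviour described above might be modelled as follows, where the class name, scan limits, and step size are all assumptions:

```python
class ElevationController:
    """Hypothetical model of the gimbal elevation controller: it steps
    the camera's elevation across a wide scan range (e.g., 180 degrees)
    in fixed increments so a target can be captured from several fields
    of view. A real controller would drive the gimbal motors."""

    def __init__(self, min_deg: float = 0.0, max_deg: float = 180.0,
                 step_deg: float = 5.0) -> None:
        self.min_deg = min_deg
        self.max_deg = max_deg
        self.step_deg = step_deg
        self.angle_deg = min_deg  # current elevation angle

    def step(self) -> float:
        """Advance one elevation step, clamping at the scan limit."""
        self.angle_deg = min(self.angle_deg + self.step_deg, self.max_deg)
        return self.angle_deg

    def scan_angles(self) -> list[float]:
        """Every elevation angle visited in one full scan."""
        n = int((self.max_deg - self.min_deg) / self.step_deg) + 1
        return [self.min_deg + i * self.step_deg for i in range(n)]
```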
[0022] In accordance with an embodiment of the present disclosure, the system for detecting and ranging a target or an object may initially capture an image of the target or object in a given FOV, wherein the FOV may include the FOV associated with the day and night camera. The captured image may then be enhanced to remove any blur or haziness present in the image. The contrast of the captured image is then computed prior to being sent to the AI engine for object detection and ranging. The AI engine/model may be trained with images having a contrast of more than 20% and a size of at least 128 × 128 pixels. The AI model then estimates the size of the target or object based on the number of pixels occupied by the object. Thereafter, triangle similarity is used to calculate the distance D′ of the object to the camera as D′ = (W × F)/P, where W is the known width/height of the object, F is the camera focal length, and P is the measured object width obtained from the object detection model. To find the optimum distance, the AI-based range estimation is performed multiple times by changing the camera FOV through the gimbal elevation controller. The recheck module, for rechecking the estimated range confidence, sends a feedback signal to the gimbal elevation controller to change the angle of elevation at which the target is captured. The elevation angle is changed in fixed intervals, for example, five steps, to enable capturing of the target with a different FOV. Changing the elevation angle enables rechecking the range estimation and thereby ascertaining the exact range of the target.
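As a rough illustration of the contrast gate and the triangle-similarity computation above, consider the following Python sketch. The function names are hypothetical, and Michelson contrast is an assumption, since the disclosure states only that training images have more than 20% contrast without naming a measure:

```python
import numpy as np

def contrast_percent(gray: np.ndarray) -> float:
    """Michelson contrast of a grayscale frame, in percent (an assumed
    measure; the disclosure does not specify one)."""
    lo, hi = float(gray.min()), float(gray.max())
    return 100.0 * (hi - lo) / (hi + lo + 1e-9)

def range_from_pixels(known_width_m: float, focal_length_px: float,
                      measured_width_px: float) -> float:
    """Triangle-similarity range from the disclosure: D' = (W x F) / P,
    with W the known object width/height, F the camera focal length
    (here expressed in pixels), and P the detected width in pixels."""
    return (known_width_m * focal_length_px) / measured_width_px

# Example: a 1.8 m tall person spanning 36 pixels with a focal length
# of 1200 pixels gives an estimated range of (1.8 * 1200) / 36 = 60 m.
print(range_from_pixels(1.8, 1200.0, 36.0))  # 60.0
```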
[0023] The disclosed solution facilitates in reducing processing and memory requirements as well as improves the accuracy in detection and ranging.
[0024] Various embodiments of the present disclosure will be explained with reference to FIGs. 1-5.
[0025] FIG. 1 illustrates an exemplary architecture (100) in which or with which an AI-enabled passive ranging and detection may be implemented, in accordance with an embodiment of the present disclosure.
[0026] In particular, the exemplary architecture (100) relates to a surveillance network. The network (100) may include an image sensor (102) for capturing an image of a target or an object (106). The image sensor (102) may include any device capable of capturing an image of an object. For example, without limitation, the image sensor (102) may include a day and night camera to capture images of the target (106) that comes within its field of view (FOV). The image sensor (102) may be placed on a gimbal (not shown), wherein the gimbal may be mounted on a UAV (not shown). The image captured by the image sensor (102) may be sent to a system (110) for further processing. The system (110) may include an AI engine (104) to detect and estimate a size and a range of the target (106). In an embodiment, the AI engine (104) may include a set of pre-trained data related to different types of objects. The AI engine (104) may detect the size of the target (106) based on a number of pixels occupied by the image of the target (106). The detected image and its range may be displayed on a display device (108) for perusal by any user such as, but not limited to, a surveillance officer.
[0027] A person of ordinary skill in the art will appreciate that the exemplary architecture (100) may be modular and flexible to accommodate any kind of changes in the architecture (100).
[0028] FIG. 2 illustrates an exemplary block diagram (200) of the system (110) for implementing AI-enabled passive ranging and detection of targets, in accordance with an embodiment of the present disclosure.
[0029] In an embodiment, the system (110) may comprise one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (110). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM,) flash memory, and the like.
[0030] In an embodiment, the system (110) may include an interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication of the system (110). The interface(s) (206) may also provide a communication pathway for one or more components of the system (110). Examples of such components include, but are not limited to, processing engine(s) (208) and a database (210).
[0031] Referring to FIG. 2, the database (210) may store the pre-trained data set, i.e., a set of data associated with the images of various targets or objects such as, but not limited to, person, building, vehicle, railway line, etc.
[0032] In an embodiment, the processing engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (110) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (110) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry. The processing engine (208) may include one or more engines selected from any of an acquisition engine (212), an AI engine (214), a display module (216), and other engines (218). It may be appreciated that the AI engine (214) of FIG. 2 may be similar to the AI engine (104) of FIG. 1 in its functionality.
[0033] In an embodiment, the one or more processor(s) (202) may cause the acquisition engine (212) to extract the set of data parameters from the database (210), enabling the AI engine (214) to detect the target or object (106) of FIG. 1 and estimate its range based on the pre-trained data set stored in the database (210). The AI engine (214) may utilise one or more machine learning models to pre-process the set of data parameters. In an embodiment, based on the pre-processing, the one or more processor(s) (202) may cause the AI engine (214) to detect the target object and estimate the range of the target object.
[0034] A person of ordinary skill in the art will appreciate that the exemplary block diagram (200) may be modular and flexible to accommodate any kind of changes in the system (110). In an embodiment, the data may be collected and deposited in a cloud-based data lake, where it may be processed to extract actionable insights, thereby enabling predictive maintenance.
[0035] FIG. 3 illustrates an exemplary flow diagram (300) of the AI-enabled passive ranging and detection system, in accordance with an embodiment of the present disclosure.
[0036] In FIG. 3, a gimbal elevation controller (302), a video sensor (304), a video frame grabber module (306), an AI model (308), a recheck module (310), and a video display module (312) are shown. In an embodiment, the video sensor (304) may include any video or image capturing device, for example, a day and night camera, for capturing an image of a target. The captured image is passed through the video frame grabber module (306), which may enhance the captured image to remove haziness and blur. The enhanced image is then passed to the AI model (308). The AI model (308) includes pre-trained data related to images of various objects. The captured image is then analysed with respect to the pre-trained data set to determine a size of the target. The target size is estimated based on the number of pixels occupied by the image of the target or object. The AI model (308) further estimates the target range using triangle similarity, where the distance D′ of the object to the camera is calculated as D′ = (W × F)/P, where W is the known width/height of the object, F is the camera focal length, and P is the measured object width obtained from the object detection model. The recheck module (310), for rechecking the estimated range confidence, sends a feedback signal to the gimbal elevation controller (302) to change the angle of elevation at which the target (106) of FIG. 1 is captured. The elevation angle is changed in fixed intervals, for example, five steps, to enable capturing of the target with a different FOV, as sketched below. Changing the elevation angle enables rechecking the range estimation and thereby ascertaining the exact range of the target. Upon confirmation of the range, the detected object is displayed on a display device by the video display module (312). The display may be used by any person, such as a surveillance officer, to check on the objects entering the surveillance network (100) shown in FIG. 1.
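The disclosure does not give the recheck module's confirmation rule; a minimal Python sketch, in which the `estimate_at` callback, confidence gate, and agreement test are all assumptions, might look like this:

```python
import statistics
from typing import Callable, Optional

def recheck_range(estimate_at: Callable[[float], tuple[float, float]],
                  angles_deg: list[float],
                  min_confidence: float = 0.5,
                  agreement_m: float = 5.0) -> Optional[float]:
    """Re-run the AI range estimate at several gimbal elevation angles
    (different FOVs) and confirm the range when the estimates agree.
    `estimate_at(angle)` is a hypothetical callback that points the
    gimbal, grabs a frame, and returns (range_m, confidence)."""
    accepted = []
    for angle in angles_deg:               # e.g., five elevation steps
        range_m, confidence = estimate_at(angle)
        if confidence >= min_confidence:   # assumed confidence gate
            accepted.append(range_m)
    if len(accepted) >= 2 and max(accepted) - min(accepted) <= agreement_m:
        return statistics.median(accepted)  # confirmed range
    return None                             # not confirmed; keep scanning
```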
[0037] FIG. 4 illustrates an exemplary flow chart of a method (400) for performing AI-enabled passive detection and ranging, in accordance with an embodiment of the present disclosure. The method (400), at step 402, includes capturing an image of a target or object (106) by an image sensor (102), as shown in FIG. 1. The method (400) proceeds with enhancing the captured image at step 404 to remove any haziness or blur associated with the image, and estimating a size and range of the target at step 406. The estimation is done by the AI model as discussed above with respect to FIGs. 1 and 2. Referring to FIG. 4, the method (400) checks the estimated range confidence at step 408, followed by changing an angle of capture at step 410, wherein the angle of capture is associated with the position of the image sensor on the gimbal. The angle is varied in small intervals, for example, in five steps. Upon changing the angle of capture at step 410, the estimated range is confirmed at step 412, followed by displaying the detected target on a display device at step 414.
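Tying the steps together, an end-to-end sketch of method (400) might read as follows; each argument is a hypothetical stand-in for the corresponding module of FIGs. 1-3, not an interface defined by the disclosure:

```python
def detect_and_range(sensor, enhancer, ai_model, recheck, display) -> None:
    """Hypothetical pipeline mirroring steps 402-414 of method (400)."""
    frame = sensor.capture()                # step 402: capture an image
    frame = enhancer.deblur(frame)          # step 404: remove blur/haze
    detection = ai_model.detect(frame)      # step 406: estimate size/range
    confirmed = recheck.confirm(detection)  # steps 408-412: check confidence,
                                            # vary the capture angle, confirm
    if confirmed is not None:
        display.show(detection, confirmed)  # step 414: display the target
```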
[0038] FIG. 5 illustrates an exemplary computer system (500) in which or with which embodiments of the present disclosure may be implemented.
[0039] As shown in FIG. 5, the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read only memory (540), a mass storage device (550), communication port(s) (560), and a processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor (570) and communication port(s) (560). The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen depending on the network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system connects. The main memory (530) may be Random Access Memory (RAM) or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or FireWire interfaces), one or more optical discs, and Redundant Array of Independent Disks (RAID) storage.
[0040] The bus (520) communicatively couples the processor (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[0041] Optionally, operator and administrative interfaces, e.g. a display, keyboard, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces may be provided through network connections connected through the communication port(s) (560). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[0042] It will be apparent to those skilled in the art that the system of the disclosure may be provided using some or all of the mentioned features and components without departing from the scope of the present disclosure. While various embodiments of the present disclosure have been illustrated and described herein, it will be clear that the disclosure is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the disclosure, as described in the claims.

ADVANTAGES OF THE PRESENT INVENTION
[0043] The present disclosure provides a method to reduce processing and external memory requirements associated with an artificial intelligence (AI)-based system for detecting and ranging targets.
[0044] The present disclosure provides improved accuracy for detecting and ranging target objects.
[0045] The present disclosure reduces the time for classifying the target object, for example, to 40 ms.
Claims:
1. A system (110) for object detection and ranging, said system (110) comprising:
one or more processors (202); and
a memory (204) operatively coupled to the one or more processors (202), wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to:
obtain an image of a target captured by an image sensor;
enhance the captured image to remove blur;
estimate a size and a range of the target based on a number of pixels associated with the target;
detect a presence of the target based on the estimation; and
display the detected target.
2. The system (110) as claimed in claim 1, wherein the image sensor comprises a day and night camera for capturing the target present in a field of view (FOV) associated with the day and night camera.
3. The system (110) as claimed in claim 1, wherein the image sensor is placed on a gimbal.
4. The system (110) as claimed in claim 3, wherein the gimbal is mounted on an unmanned aerial vehicle (UAV).
5. The system (110) as claimed in claim 3, wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to:
change an angle of capture associated with the image sensor by sending a feedback signal to an elevation controller of the gimbal; and
confirm the estimated range.
6. The system (110) as claimed in claim 5, wherein the feedback signal is based on a confidence score associated with the estimated range.
7. A method (400) for object detection and ranging, said method (400) comprising:
obtaining, by one or more processors (202), an image of a target captured by an image sensor;
enhancing, by the one or more processors (202), the captured image to remove blur;
estimating, by the one or more processors (202), a size and a range of the target based on a number of pixels associated with the target;
detecting a presence of the target based on the estimation; and
displaying, by the one or more processors (202), the detected target on a display device.
8. The method (400) as claimed in claim 7, comprising:
rechecking, by the one or more processors (202), a confidence score associated with the detected range for different camera FOVs to ascertain the exact range of the target.
9. The method (400) as claimed in claim 7, comprising:
changing, by the one or more processors (202), an angle of capture associated with the image sensor by sending a feedback signal to an elevation controller of a gimbal; and
confirming, by the one or more processors (202), the estimated range.

Documents

Application Documents

# Name Date
1 202341039289-STATEMENT OF UNDERTAKING (FORM 3) [08-06-2023(online)].pdf 2023-06-08
2 202341039289-POWER OF AUTHORITY [08-06-2023(online)].pdf 2023-06-08
3 202341039289-FORM 1 [08-06-2023(online)].pdf 2023-06-08
4 202341039289-DRAWINGS [08-06-2023(online)].pdf 2023-06-08
5 202341039289-DECLARATION OF INVENTORSHIP (FORM 5) [08-06-2023(online)].pdf 2023-06-08
6 202341039289-COMPLETE SPECIFICATION [08-06-2023(online)].pdf 2023-06-08
7 202341039289-ENDORSEMENT BY INVENTORS [15-09-2023(online)].pdf 2023-09-15
8 202341039289-POA [04-10-2024(online)].pdf 2024-10-04
9 202341039289-FORM 13 [04-10-2024(online)].pdf 2024-10-04
10 202341039289-AMENDED DOCUMENTS [04-10-2024(online)].pdf 2024-10-04
11 202341039289-Response to office action [01-11-2024(online)].pdf 2024-11-01