Abstract: SYSTEM AND METHOD FOR OBSTRUCTION DETECTION IN A FIELD OF VIEW (FOV) OF A CAMERA Provided is a system (101) for obstruction detection in a field of view (FOV) of a camera, the system (101) comprising a memory (213) configured to store computer-executable instructions and one or more processors (201) configured to execute the instructions to acquire a plurality of images of the FOV of the camera, wherein the plurality of images includes a first image and a second image. The one or more processors (201) in the system (101) further compare the first image with the second image and determine an obstruction object in the FOV of the camera based on the comparison. The one or more processors (201) further determine a contour of the obstruction object, determine a degree of blockage of the FOV of the camera based on the contour of the obstruction object and generate a notification based on the degree of blockage of the FOV of the camera. << To be published with FIG. 2>>
SYSTEM AND METHOD FOR OBSTRUCTION DETECTION IN A FIELD OF VIEW (FOV) OF A CAMERA
TECHNOLOGICAL FIELD OF THE INVENTION
[0001] Example embodiments of the present disclosure generally relate to image processing, and more particularly to a method and system for obstruction detection in a field of view (FOV) of a camera.
BACKGROUND
[0002] A camera's field of view (FOV) may be obstructed by an object lying in the FOV near the camera. For example, in the case of surveillance cameras, obstructions may appear due to human or animal intervention, growth of plant foliage, dust and dirt build-up on the camera lens or porthole, etc. Such obstructions can significantly block or obscure various regions sought to be captured by the camera. In turn, one or more objects of interest otherwise sought to be captured in such an image may not be clearly visualized and/or readily identifiable in the image. Accordingly, actions reliant on accurate visualization and/or identification of one or more target objects in a captured image may be frustrated. Moreover, some more advanced camera systems may be triggered to capture an image in response to events occurring in a scene observed by the camera, e.g., detection of a vehicle or vehicle movement within the scene. In a case where such an event is obscured from the FOV of the camera by an obstruction, the camera may not capture an otherwise desired image. In cases where the image is captured, the image may lack clarity due to the obstruction.
[0003] Traditionally, operators of automated/unattended cameras such as those mentioned above rely on labour-intensive manual practices to monitor, check and/or verify obstruction-free operation. However, such labour-intensive processes are prone to human oversight and/or error. Accordingly, there is a need for accurate detection of a blocked FOV of a camera (i.e., accurate detection of an obstruction) and generation of a notification based on the blocked FOV of the camera.
SUMMARY OF THE INVENTION
[0004] Some example embodiments disclosed herein provide a system for obstruction detection in a field of view (FOV) of a camera. The system comprises a memory configured to store computer-executable instructions and one or more processors configured to execute the instructions to acquire a plurality of images of the FOV of the camera, wherein the plurality of images includes a first image and a second image. The one or more processors are further configured to compare the first image with the second image. The one or more processors are further configured to determine an obstruction object in the FOV of the camera based on the comparison of the first image with the second image. The one or more processors are further configured to determine a contour of the obstruction object. The one or more processors are further configured to determine a degree of blockage of the FOV of the camera based on the contour of the obstruction object and generate a notification based on the degree of blockage of the FOV of the camera.
[0005] According to an embodiment, the one or more processors are further configured to compare the degree of blockage with a threshold value and determine a timer value associated with the obstruction object in the FOV of the camera based on the comparison of the degree of blockage with the threshold value.
[0006] According to an embodiment, the system further comprises the camera configured to capture each of the first image at a first time instant and the second image at a second time instant.
[0007] According to an embodiment, the one or more processors are further configured to extract one or more attributes of each of the first image and the second image, and the one or more attributes include at least a pixel intensity.
[0008] According to an embodiment, the one or more processors are further configured to compare the first image with the second image based on the extracted one or more attributes of each of the first image and the second image.
[0009] According to an embodiment, the one or more processors are further configured to determine that the obstruction object corresponds to one of temporary obstruction or permanent obstruction based on a threshold time period.
[0010] According to an embodiment, the notification corresponds to one or more of an audio message, a video message, an audio-visual message, a vibration notification, or a text message.
[0011] Some example embodiments disclosed herein provide a method for obstruction detection in a field of view (FOV) of a camera. The method comprises acquiring a plurality of images of the FOV of the camera, wherein the plurality of images includes a first image and a second image. The method may further include comparing the first image with the second image. The method may further include determining an obstruction object in the FOV of the camera based on the comparison of the first image with the second image. The method may further include determining a contour of the obstruction object, determining a degree of blockage of the FOV of the camera based on the contour of the obstruction object and generating a notification based on the degree of blockage of the FOV of the camera.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The presently disclosed embodiments will be further explained with reference to the attached drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.
[0013] FIG. 1 illustrates a schematic diagram of a network environment of a system for obstruction detection in a field of view (FOV) of a camera, in accordance with an example embodiment.
[0014] FIG. 2 illustrates a block diagram of the system for detecting the obstruction in the FOV of the camera, in accordance with an example embodiment.
[0015] FIG. 3 illustrates a flow chart representing a sequence of steps for obstruction detection in the FOV of the camera, in accordance with an example embodiment.
[0016] FIGs. 4A-4C illustrate an exemplary scenario of a working environment of the system for detecting obstruction in the FOV of the camera, in accordance with an example embodiment.
[0017] FIG. 5 illustrates another exemplary scenario of a working environment of the system for detecting obstruction in the FOV of the camera, in accordance with an example embodiment.
[0018] FIG. 6 illustrates a flow diagram of a method for obstruction detection in the FOV of the camera, in accordance with an example embodiment.
DETAILED DESCRIPTION OF THE INVENTION
[0019] FIG. 1 illustrates a schematic diagram of a network environment 100 of a system 101 for obstruction detection in a field of view (FOV) of a camera, in accordance with an example embodiment. The system 101 may be communicatively coupled to a user device 103 and a cloud 105 via a network 107. The user device 103 may further include an application 103a installed in it and one or more sensors 103b (e.g., a camera) to capture video of the surroundings of the user device 103. Further, it is possible that one or more components may be rearranged, changed, added, and/or removed in the user device 103.
[0020] In an example embodiment, the system 101 may be embodied in one or more of several ways as per the required implementation. For example, the system 101 may be embodied as a cloud based service or a cloud based platform. As such, the system 101 may be configured to operate outside the user device 103. However, in some example embodiments, the system 101 may be embodied within the user device 103. In each of such embodiments, the system 101 may be communicatively coupled to the components shown in FIG. 1 to carry out the desired operations and, wherever required, modifications may be made within the scope of the present disclosure. In one example embodiment, the system 101 may be implemented in a vehicle, where the vehicle may be an autonomous vehicle, a semi-autonomous vehicle, or a manually-driven vehicle. Further, in some example embodiments, the system 101 may be a standalone unit configured to detect obstruction in the FOV of the camera (i.e., the one or more sensors 103b) embodied in the user device 103.
[0021] In some example embodiments, the user device 103 may be any user accessible device such as a smartphone, a portable computer, a vehicle, or any device that may be configured to capture images in real-time to execute one or more functions. The user device 103 may comprise a processor, a memory and a communication interface and one or more different modules to perform different functions. The processor, the memory and the communication interface may be communicatively coupled to each other. In some example embodiments, the user device 103 may be communicatively coupled to the system 101 and may be used for generation of notifications to guide a user. In some example embodiments, the user device 103 may be associated, coupled, or otherwise integrated with a vehicle of the user, such as an advanced driver assistance system (ADAS), a personal navigation device (PND), a portable navigation device, an infotainment system and/or other device that may be configured to provide guidance and navigation related functions to the user. In such example embodiments, the user device 103 may comprise processing means such as a central processing unit (CPU), storage means such as on-board read only memory (ROM) and random access memory (RAM), the one or more sensors 103b such as a camera, a position sensor (such as a GPS sensor), a gyroscope, a LIDAR sensor, a proximity sensor, a motion sensor (such as an accelerometer), a display-enabled user interface (such as a touch screen display), and other components as may be required for specific functionalities of the user device 103. For example, the notifications generated via the application 103a installed in the vehicle may be output to notify/alert a driver of the vehicle about an obstruction in the FOV of the camera of the vehicle. In some example embodiments, the cloud 105 may store sensor data obtained by the one or more sensors 103b (e.g., the camera) present in the user device 103. The sensor data may include one or more units of different regions of interest, which may be used later by the system 101.
[0022] The network 107 may be wired, wireless, or any combination of wired and wireless communication networks, such as cellular, Wi-Fi, internet, local area networks, or the like. In one embodiment, the network 107 may include one or more networks such as a data network, a wireless network, a telephony network, or any combination thereof. It is contemplated that the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fiber-optic network, and the like, or any combination thereof. In addition, the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks (e.g., LTE-Advanced Pro), 5G New Radio networks, ITU-IMT 2020 networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (Wi-Fi), wireless LAN (WLAN), Bluetooth, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), and the like, or any combination thereof.
[0023] FIG. 2 illustrates a block diagram of the system 101 for detecting the obstruction in the FOV of the camera, in accordance with an example embodiment. The system 101 may include a processing means such as at least one processor 201 (hereinafter, also referred to as “processor 201”), storage means such as at least one memory 213 (hereinafter, also referred to as “memory 213”), and a communication means such as at least one communication interface 215 (hereinafter, also referred to as “communication interface 215”). The processor 201 may retrieve computer program code instructions that may be stored in the memory 213 for execution of the computer program code instructions. The processor 201 may include one or more different modules to perform different functions, such as a motion manager module 203, an image provider service (IPS) 205, a recording manager module 207, a Region-of-Interest (ROI) manager module 209, and an obstruction detection module 211.
[0024] The processor 201 may be embodied in a number of different ways. For example, the processor 201 may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other processing circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processor 201 may include one or more processing cores configured to perform independently. A multi-core processor may enable multiprocessing within a single physical package. Additionally or alternatively, the processor 201 may include one or more processors configured in tandem via a bus (not shown in FIG. 2) to enable independent execution of instructions, pipelining and/or multithreading.
[0025] The motion manager module 203 may execute motion monitoring for motion-related events. The motion manager module 203 may use the sensor data obtained by the one or more sensors 103b in the user device 103 to detect motion of one or more objects in a region of interest (such as the FOV of the camera). To that end, the motion manager module 203 may register with the IPS 205 for fetching images of the one or more objects. The motion manager module 203 may detect the motion of the one or more objects based on the images including the one or more objects. When the motion manager module 203 detects motion or stops detecting motion, the motion manager module 203 may provide an event callback. Accordingly, other modules in the system 101 that are registered with the motion manager module 203 may receive notifications for such an event callback. In an embodiment, the motion manager module 203 may also execute zone-based monitoring, which allows a user to mark areas of interest in the FOV of the camera and generates events only if motion is detected in the marked areas of interest.
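The following is a minimal, illustrative sketch of such zone-based monitoring, assuming OpenCV and NumPy are available; the class name ZoneMotionMonitor, the rectangular zone format, and the change-fraction threshold are hypothetical choices and are not taken from the disclosure.

```python
# Illustrative sketch only (not the claimed implementation). Assumes OpenCV and
# NumPy; ZoneMotionMonitor, the (x, y, w, h) zone format and the 2% change
# threshold are hypothetical.
import cv2
import numpy as np


class ZoneMotionMonitor:
    def __init__(self, zone, min_changed_fraction=0.02):
        self.zone = zone                      # (x, y, w, h) area marked by the user
        self.min_changed_fraction = min_changed_fraction
        self.prev_gray = None
        self.motion_active = False

    def update(self, frame_bgr, on_motion_start=None, on_motion_stop=None):
        x, y, w, h = self.zone
        gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        if self.prev_gray is None:
            self.prev_gray = gray
            return
        # Pixel-wise difference between consecutive frames, restricted to the zone.
        diff = cv2.absdiff(self.prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        moving = np.count_nonzero(mask) / mask.size >= self.min_changed_fraction
        # Event callbacks fire only on state transitions (motion starts / stops).
        if moving and not self.motion_active and on_motion_start:
            on_motion_start()
        if not moving and self.motion_active and on_motion_stop:
            on_motion_stop()
        self.motion_active = moving
        self.prev_gray = gray
```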
[0026] The IPS 205 may perform functions related to capturing one or more images of the surroundings of the user device 103 in the FOV of the camera of the user device 103. The one or more images may correspond to image frames of a video of the surroundings of the user device 103. In some example embodiments, the IPS 205 may upload the one or more images to the cloud 105. In some implementations, if the user device 103 has multiple camera instances, the IPS 205 may have one IPS instance per camera. In an embodiment, the IPS 205 may capture images continuously at 5 frames per second via a camera. Further, each module/service that requires image frames may request the image frames from the IPS 205. For example, the motion manager module 203 may request the IPS 205 to obtain the image frames periodically. In some embodiments, the IPS 205 maintains a record of all the images stored on the user device 103. In an embodiment, if any module generates any bookmark event with respect to any image, it may request the IPS 205 to save the corresponding image. The IPS 205 may save that image either on the memory 213 or on the cloud 105 as per a subscription policy. In some embodiments, the IPS 205 may perform a regular scheduling task to delete all images for which no events are generated by any module of the processor 201 and for which no request to save the image is received within a particular duration, e.g., 30 seconds.
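The following is a minimal, illustrative sketch of an image provider loop of this kind, assuming the camera is reachable via cv2.VideoCapture; the class name ImageProviderService, the in-memory frame store, and the retention window are hypothetical simplifications of the described behaviour (roughly 5 frames per second, bookmark-based saving, deletion after about 30 seconds).

```python
# Illustrative sketch only. Assumes the camera is reachable through
# cv2.VideoCapture; class and attribute names are hypothetical.
import time
import cv2


class ImageProviderService:
    def __init__(self, camera_index=0, fps=5, retention_seconds=30):
        self.capture = cv2.VideoCapture(camera_index)
        self.interval = 1.0 / fps            # roughly 5 frames per second
        self.retention_seconds = retention_seconds
        self.frames = {}                     # timestamp -> image (stand-in for device storage)
        self.bookmarked = set()              # timestamps that modules asked to save

    def grab_frame(self):
        ok, frame = self.capture.read()
        if ok:
            self.frames[time.time()] = frame
        return ok

    def bookmark(self, timestamp):
        # Another module generated a bookmark event: keep this image (memory or cloud).
        self.bookmarked.add(timestamp)

    def cleanup(self):
        # Scheduled task: delete images with no save request within the retention window.
        now = time.time()
        for ts in list(self.frames):
            if ts not in self.bookmarked and now - ts > self.retention_seconds:
                del self.frames[ts]

    def run(self, duration_seconds=10.0):
        end = time.time() + duration_seconds
        while time.time() < end:
            self.grab_frame()
            self.cleanup()
            time.sleep(self.interval)
```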
[0027] The recording manager module 207 may perform all recording-related operations on the user device 103, e.g., start/stop the recording of the one or more images captured by the camera and generate bookmark events if required. The recording manager module 207 may start the recording based on one or more configurations from a mobile application (i.e., the application 103a). In some example embodiments, the recording manager module 207 may stop the recording as soon as detection of motion stops, irrespective of the one or more configurations.
[0028] The ROI manager module 209 may create and manage an ROI, i.e., a region of interest, in the surroundings of the user device 103. In some example embodiments, the ROI may correspond to the FOV of the camera. The ROI manager module 209 may also fetch ROI details and save the details in a database.
[0029] The obstruction detection module 211 may correspond to a service to detect obstructions such as objects and faces obstructing the FOV of the user device 103 (e.g., the FOV of the camera). The obstruction detection module 211 may be registered with the IPS 205 and the recording manager module 207 to receive one or more images captured by the camera. The obstruction detection module 211 may receive images from the IPS 205 and the recording manager module 207, and perform comparison of pixels of the received images to determine the obstruction in the FOV of the camera. The obstruction detection module 211 may further determine a contour of one or more obstruction objects. Further, the obstruction detection module 211 may determine a degree of blockage of the FOV of the camera based on the contour of the one or more obstruction objects.
[0030] Additionally or alternatively, the processor 201 may include one or more processors capable of processing large volumes of workloads and operations to provide support for big data analysis. In an example embodiment, the processor 201 may be in communication with the memory 213 via a bus for passing information among components coupled to the system 101.
[0031] The memory 213 may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory 213 may be an electronic storage device (for example, a computer readable storage medium) comprising gates configured to store data (for example, bits) that may be retrievable by a machine (for example, a computing device like the processor 201). The memory 213 may be configured to store information, data, content, applications, instructions, or the like, for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present invention. For example, the memory 213 may be configured to buffer input data for processing by the processor 201. As exemplarily illustrated in FIG. 2, the memory 213 may be configured to store instructions for execution by the processor 201. As such, whether configured by hardware or software methods, or by a combination thereof, the processor 201 may represent an entity (for example, physically embodied in circuitry) capable of performing operations according to an embodiment of the present invention while configured accordingly. Thus, for example, when the processor 201 is embodied as an ASIC, FPGA or the like, the processor 201 may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processor 201 is embodied as an executor of software instructions, the instructions may specifically configure the processor 201 to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processor 201 may be a processor of a specific device (for example, a mobile terminal or a fixed computing device) configured to employ an embodiment of the present invention by further configuration of the processor 201 by instructions for performing the algorithms and/or operations described herein. The processor 201 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor 201.
[0032] The communication interface 215 may comprise an input interface and an output interface for supporting communications to and from the user device 103 or any other component with which the system 101 may communicate. The communication interface 215 may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data to/from a communications device in communication with the user device 103. The communication interface 215 may comprise an input/output interface in which a user may interact to adjust one or more user-configurable aspects (e.g., alert or notification criteria) of the system 101. In some embodiments, the communication interface 215 may include a display and one or more input keys for entering information related to the one or more user-configurable aspects.
[0033] FIG. 3 illustrates a flow chart 300 representing a sequence of steps for obstruction detection in the FOV of the camera, in accordance with one embodiment. In some embodiments, the sequence of steps is executed by the processor 201 of the system 101. Accordingly, blocks of the flow diagram support combinations of means for performing the specified functions and combinations of operations for performing the specified functions.
[0034] At step 301, the method may include monitoring, by the system 101, an area in the FOV of the camera of the user device 103 using live imagery captured by the camera. The camera may be a digital camera and may be either a still picture camera or a video camera. The FOV corresponds to a range of the area that may be captured by the camera at a particular position and orientation in space. Objects outside the FOV may not be captured by the camera. The camera may have an effective focal range. Objects within the focal range are generally in focus, while objects outside the focal range are generally out of focus. Some embodiments are based on the recognition that only in-focus objects are captured by the camera. Some other example embodiments may also consider out-of-focus objects when capturing images.
[0035] At step 303, the method may include obtaining a first image associated with the FOV of the camera at a first time instant (t1), and at step 305, the method may include obtaining a second image associated with the FOV of the camera at a second time instant (t2). When referring to a captured or otherwise obtained image from the camera, it is intended to mean an image from a picture camera or a still frame from a video camera. The first image associated with the FOV of the camera is the image captured/obtained at time t1, and the second image associated with the FOV of the camera is the image captured/obtained at time t2. For example, the system 101 may obtain an image of a park at a time instant, say 5:00 PM (i.e., the first image at the first time instant), and another image of the same park at 5:01 PM (i.e., the second image at the second time instant), wherein both images are captured by the same camera. In the case of a live video, a first frame is obtained by the system 101 at a time instant of the live recording, e.g., 5:00 PM, and a second frame is obtained at a later time instant, e.g., 5:01 PM. In some example embodiments, images may be obtained at any time instant.
[0036] Further, attributes such as landmark points for the first image and the second image are extracted. The attributes may be extracted by executing feature extraction on both the images (i.e., the first image and the second image). The attributes of the first image and the second image may include, but are not limited to, the pixel intensity of each pixel of the first image and the second image. In an example embodiment, the feature extraction may be performed using algorithms/techniques such as OpenCV feature extraction using key point detectors, a dissimilarity/distance metric using Siamese convolutional neural networks (ConvNets), image processing techniques, and the like.
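As a minimal, illustrative sketch of such attribute extraction, the following uses OpenCV's ORB detector as one stand-in for the key point detectors mentioned above and returns the per-pixel intensity alongside the detected key points; the function name and the choice of ORB are assumptions, not requirements of the disclosure.

```python
# Illustrative sketch only. Assumes OpenCV and NumPy; ORB stands in for the
# "key point detectors" mentioned above, and the attribute dictionary layout is hypothetical.
import cv2
import numpy as np


def extract_attributes(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return {
        "intensity": gray.astype(np.float32),   # pixel intensity of each pixel
        "mean_intensity": float(gray.mean()),
        "keypoints": keypoints,                 # landmark points
        "descriptors": descriptors,
    }
```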
[0037] At step 307, the method may include comparing the first image and the second image. The comparison of the first image and the second image includes comparison of the extracted attributes, such as the landmark points (or key points), of the first image and the second image. If the extracted attributes of the first image and the second image are the same, the two images are determined to be identical. In some example embodiments, pixel intensities associated with the first image and the second image are compared. If the pixel intensities match (i.e., the pixel intensity associated with the first image is equal to the pixel intensity associated with the second image), then the first image and the second image are determined to be identical images. In some example embodiments, the pixel intensity of the images may change due to exposure to light. For example, the first image is obtained at the time of dusk, e.g., 6:45 PM, and the subsequently obtained second image is exposed to streetlight, e.g., the image obtained at 6:46 PM. In such cases, the second image may indicate a higher pixel intensity (i.e., a change in pixel intensities). For such cases, the system 101 may trigger a verification process before declaring the FOV as blocked. In the verification process, pixel equalization may be performed by the system 101. In some example embodiments, if the first image and the second image are not identical, it indicates that the FOV of the camera is blocked by one or more objects. In such cases, the control is passed to the verification process.
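A minimal, illustrative sketch of such a comparison follows, assuming OpenCV and NumPy; histogram equalization stands in for the pixel equalization mentioned above, and the agreement tolerance and similarity threshold are hypothetical values.

```python
# Illustrative sketch only. Assumes OpenCV and NumPy; histogram equalization is
# used as one form of the pixel equalization mentioned above, and the tolerance
# (10 grey levels) and similarity threshold (95%) are hypothetical values.
import cv2
import numpy as np


def frames_identical(first_bgr, second_bgr, similarity_threshold=0.95):
    first = cv2.equalizeHist(cv2.cvtColor(first_bgr, cv2.COLOR_BGR2GRAY))
    second = cv2.equalizeHist(cv2.cvtColor(second_bgr, cv2.COLOR_BGR2GRAY))
    # Fraction of pixels whose equalized intensities agree within a small tolerance,
    # so a global exposure change (dusk vs. streetlight) is not flagged as a blockage.
    agree = np.mean(np.abs(first.astype(np.int16) - second.astype(np.int16)) < 10)
    return agree >= similarity_threshold
```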
[0038] In some example embodiments, if the comparison of the extracted attributes indicates that both images are identical, then the control goes back to step 305. If the comparison of the extracted attributes indicates that both images are not identical, the system 101 triggers the verification process. The verification process is necessary because there is a high chance of detecting a false positive, i.e., an inaccurate generation of a notification. In the verification process, at step 309, an algorithm is used to determine the degree to which the FOV of the camera is blocked, as well as the intensity of the light present.
[0039] At step 309, the method may include determining a contour of one or more objects blocking the FOV of the camera. In an example embodiment, the system 101 may determine the contour based on an algorithm (e.g., a Gaussian Mixture Model based background subtraction algorithm). The Gaussian mixture model is a method for real-time segmentation of moving regions in image sequences, which involves background subtraction or thresholding the error between the first image and the second image. The degree to which the FOV of the camera is blocked is determined based on the determined contour. More specifically, the degree to which the FOV of the camera is blocked is based on a size or an area of the determined contour.
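A minimal, illustrative sketch of this step follows, using OpenCV's MOG2 background subtractor as one available Gaussian-Mixture-Model implementation and taking the degree of blockage as the largest contour area relative to the frame area; the binary threshold and the OpenCV 4.x return signature of findContours are assumptions.

```python
# Illustrative sketch only. Uses OpenCV's MOG2 subtractor as one Gaussian-Mixture-Model
# implementation; the binary threshold and the OpenCV 4.x findContours signature are assumptions.
import cv2


def degree_of_blockage(first_bgr, second_bgr):
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    subtractor.apply(first_bgr)                   # learn the background from the first image
    foreground = subtractor.apply(second_bgr)     # foreground mask of the second image
    _, mask = cv2.threshold(foreground, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0, None
    largest = max(contours, key=cv2.contourArea)  # contour of the obstruction object
    frame_area = second_bgr.shape[0] * second_bgr.shape[1]
    return cv2.contourArea(largest) / frame_area, largest
```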
[0040] At step 311, the method includes checking if the degree of blockage is above a threshold value. In some example embodiments, if the degree to which the FOV of the camera is blocked is below the threshold value, the control is passed to step 307. If the degree to which the FOV of the camera is blocked is above the threshold value, then control is passed to step 313, which includes checking the duration of the blockage.
[0041] At step 313, the method may include determining a timer value (T) by the system 101. The timer value T indicates the duration for which the FOV is blocked. The timer value T is determined based on the time instant t1 and the time instant t2. Initially, the timer value is set to zero, i.e., T = 0. Subsequently, the timer value is updated as T = T + (t2 - t1). Once the timer value is determined, the control is passed to step 315.
[0042] At step 315, the system 101 checks if the timer value (T) is above a threshold value of the timer. If the condition is true, the control passes to step 317. In some example embodiments, if the timer value is below the timer threshold, then the control passes to step 307. For example, the timer threshold is 120 seconds and the FOV is blocked for 121 seconds. This implies that the duration of the blockage is greater than the timer threshold. As a result, the obstruction causing the blockage in the FOV is verified.
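A minimal, illustrative sketch of this two-step check (steps 311 to 315) follows; the particular threshold values and the resetting of the timer when the degree of blockage falls below the threshold are assumptions made for illustration.

```python
# Illustrative sketch only. The threshold values are examples, and resetting the
# timer when the blockage falls below the degree threshold is an assumption.
DEGREE_THRESHOLD = 0.5     # fraction of the FOV that must be covered (step 311)
TIMER_THRESHOLD = 120.0    # seconds the blockage must persist (step 315)


def verify_obstruction(degree, t1, t2, timer=0.0):
    """Return (is_verified, updated_timer) for one pair of frames."""
    if degree < DEGREE_THRESHOLD:
        return False, 0.0                     # below threshold: back to step 307
    timer += (t2 - t1)                        # step 313: T = T + (t2 - t1)
    return timer > TIMER_THRESHOLD, timer     # step 315: verified only if persistent
```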
[0043] In some example embodiments, the FOV may be blocked by temporary interventions, such as a person blocking the FOV of the camera by touching a lens of the camera. In such cases, based on the timer threshold value, the system 101 determines that the blockage of the FOV is temporary (i.e., the duration of the blockage is below the timer threshold value). For example, the timer threshold is 30 seconds. If the FOV is blocked completely for 20 seconds, the system 101 may determine that the blockage of the FOV is temporary, since the duration of the blockage is below the timer threshold value.
[0044] At step 317, the method may include generating the notification, by the system 101. The notifications may include an audio message, a video message, an audio-visual message, a vibration notification, or a text message. The notification may be sent to a user. Based on the notification, the user may initiate a preventive measure or raise an alarm. In some other example embodiments, the system 101 may trigger certain actions based on the generated notification. For example, in the case of a surveillance camera (such as the camera of the user device 103) equipped with alarms, the system 101 may trigger the alarms to notify one or more persons regarding the obstruction. In the case of autonomous vehicles with cameras (e.g., the camera of the user device 103), the system 101 may determine navigation data for the autonomous vehicle based on the generated notification. The obstruction detected at step 307 is verified in a two-fold process, at steps 311 and 315, respectively. A final action is determined on the basis of this two-step verification process. Even if a few sample images are declared as detected obstructions at step 307, samples whose degree of blockage is below the threshold value are filtered out at step 311. Further, samples having a degree of blockage above the threshold value but a duration of blockage less than the timer threshold value are filtered out at step 315. Hence, the likelihood of generating an erroneous notification and subsequently raising a false alarm is reduced significantly.
[0045] FIGs. 4A to 4C illustrate an exemplary scenario 400 of a working environment of the system 101 for detecting obstruction in the FOV of the camera, in accordance with an example embodiment. In FIGs. 4A to 4C, a vehicle 401 with the system 101 (not shown) and a camera 403 (the camera 403 is similar to the camera of the user device 103) is shown. The camera 403 may capture a first image 405a, of a scene in the FOV of the camera, at a first instant of time t1, as shown in FIG. 4A. Further, the camera 403 may capture a second image 405b, of the scene in the FOV of the camera, at a second instant of time t2, as shown in FIG. 4B. After obtaining both the first image 405a and the second image 405b, the system 101 may compare the first image 405a and the second image 405b based on one or more attributes. The one or more attributes may include pixel intensity and the like. In an embodiment, the first image 405a and the second image 405b captured by the camera 403 may be two consecutive images. In another embodiment, the first image 405a may be stored in the cloud 105 and the second image 405b may be captured by the camera 403 in real time. In some other embodiments, the first image 405a may be stored in the camera 403 to support edge analytics, such that the cost of the cloud 105 is saved. Further, the second image 405b may be captured by the camera 403 in real time.
[0046] Based on the comparison of the first image 405a and the second image 405b, the system 101 may detect one or more new objects, such as 407a and 407b, in the FOV of the camera 403 from the second image 405b. To that end, the system 101 may determine a contour 409a and a contour 409b of the one or more new objects 407a and 407b blocking the FOV of the camera 403, as shown in FIG. 4C. Such objects may also be referred to as obstruction objects 407a and 407b in the FOV of the camera 403. The system 101 may determine whether the degree of blockage by the contours 409a and 409b of the one or more new objects 407a and 407b is above or below a threshold value. The degree of blockage is determined to identify whether the obstruction is a temporary obstruction or a permanent obstruction. If the blockage persists for more than a threshold period of time, the system 101 may determine the new objects 407a and 407b to be a permanent obstruction in the FOV of the camera 403. More specifically, the new objects 407a and 407b may be considered as obstructions in the FOV of the camera 403 if the new objects 407a and 407b remain at the same location for more than a threshold period of time and block the FOV of the camera 403.
[0047] The system 101 may determine the timer value T for which the FOV of the camera 403 is blocked. The timer value T is determined based on the time instant t1 and the time instant t2. If the timer value T is above a threshold value, the system 101 may generate a notification to an occupant of the vehicle 401 via a mobile application (such as the application 103a in the user device 103). In an embodiment, the notifications may include an audio message, a video message, an audio-visual message, a vibration notification, or a text message. The notification may be sent to the occupant of the vehicle 401 via output devices in the vehicle 401 such as a speaker, a display device, and the like. Based on the notification, the occupant may initiate a preventive measure or raise an alarm. In some other example embodiments, the system 101 may trigger certain actions based on the generated notification, such as automatically applying brakes to the vehicle 401, and the like.
[0048] FIG. 5 illustrates another exemplary scenario 500 of a working environment of the system 101 for detecting obstruction in the FOV of the camera, in accordance with an example embodiment. In FIG. 5, there is shown a camera 501 in a premises such as a house, a building, a bank, and the like. The camera 501 may correspond to the camera 403 and the camera of the user device 103. There is further shown a person 503 creating an obstruction by covering the camera 501 with a cloth 507 (or a box) such that the FOV 505 of the camera 501 is blocked. The camera 501 may be a surveillance camera configured to record video of the premises at a time when the FOV 505 of the camera 501 is being obstructed by the person 503.
[0049] In an example embodiment, the camera 501 may continuously record the video of the premises in the FOV 505 of the camera 501 and store images (such as image frames) comprising the video on the cloud 105. The obstruction detection module 211 may continuously fetch newly captured images of the video and compare them with the previously stored images of the video to detect the obstruction.
[0050] In an example embodiment, the camera 501 may capture an image at a time instant when the camera 501 is being blocked by the person 503 in real time. The obstruction detection module 211 may compare such an image with the previously stored images. More specifically, the obstruction detection module 211 may compare pixels of the images to detect the obstruction in the FOV 505 of the camera 501. Further, the obstruction detection module 211 may determine a contour of the obstruction as described in the description of FIG. 3. The contour of the obstruction may be further compared with a threshold value to determine a degree of blockage. In some embodiments, based on the determined degree of blockage, the system 101 may further determine if the FOV 505 is partially blocked or completely blocked.
[0051] Further, in some embodiments, the system 101 may generate an alert or a notification to an owner, a security guard, and/or control room members of the premises based on the detection of the obstruction. For example, if a thief tries to block the FOV 505 of the camera 501, the notification or alert may be sent to the security guard or the control room member. The notification may be one or more of a text message, a security alarm, an audio message, a video message, an audio-visual message, or a vibration notification.
[0052] FIG. 6 illustrates a flow diagram of a method 600 for obstruction detection in the FOV of the camera, in accordance with an example embodiment. It will be understood that each block of the flow diagram of the method 600 may be implemented by various means, such as hardware, firmware, processor, circuitry, and/or other communication devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory 213 of the system 101, employing an embodiment of the present invention and executed by a processor 201. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (for example, hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flow diagram blocks. These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture the execution of which implements the function specified in the flowchart blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flow diagram blocks.
[0053] Accordingly, blocks of the flow diagram support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flow diagram, and combinations of blocks in the flow diagram, may be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions. The method 600 illustrated by the flowchart diagram of FIG. 6 is for obstruction detection in the FOV of the camera of the user device 103 or the camera 403. Fewer, more, or different steps may be provided.
[0054] At step 601, the method comprises acquiring a plurality of images of the FOV of the camera, wherein the plurality of images includes a first image and a second image (e.g., the first image 405a and the second image 405b). The plurality of images is captured by the camera 403 at different time instants.
[0055] At step 603, the method comprises comparing the first image 405a with the second image 405b. The system 101 may extract one or more features of both the first image and the second image. The system 101 may further compare the first image 405a and the second image 405b based on the extracted features. In an embodiment, one of the features may include pixel intensity.
[0056] At step 605, the method comprises determining an obstruction object in the FOV of the camera based on the comparison of the first image 405a with the second image 405b. The obstruction may be a temporary obstruction or a permanent obstruction, which is determined based on a threshold time period.
[0057] At step 607, the method comprises determining a contour of the obstruction object. At step 609, the method comprises determining a degree of blockage of the FOV of the camera based on the contour of the obstruction object. The determined degree of blockage may further be compared with a threshold value of blockage.
[0058] At step 611, the method comprises generating a notification based on the degree of blockage of the FOV of the camera. The system 101 determines the blockage in the FOV of the camera and assists the user by generating a notification associated with the determined blockage.
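As a minimal, illustrative sketch, the following ties steps 601 to 611 together by reusing the hypothetical helpers sketched earlier (frames_identical, degree_of_blockage, and verify_obstruction); the camera_read callable and the print-based notification are placeholders, not part of the disclosure.

```python
# Illustrative sketch only, reusing the hypothetical helpers sketched earlier
# (frames_identical, degree_of_blockage, verify_obstruction). camera_read is a
# placeholder callable returning a BGR frame; the notification is reduced to a print.
import time


def obstruction_detection_cycle(camera_read, timer=0.0):
    t1 = time.time()
    first = camera_read()                              # step 601: first image of the FOV
    second = camera_read()                             # step 601: second image of the FOV
    t2 = time.time()
    if frames_identical(first, second):                # step 603: compare the two images
        return 0.0                                     # step 605: no obstruction object
    degree, _contour = degree_of_blockage(first, second)   # steps 607 and 609
    verified, timer = verify_obstruction(degree, t1, t2, timer)
    if verified:                                       # step 611: notify on verified blockage
        print(f"Notification: FOV blocked, about {degree:.0%} of the frame is covered")
    return timer
```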
[0059] In an example embodiment, the system for performing the method 600 described above may comprise a processor configured to perform some or each of the operations (601-611) described above. The processor may, for example, be configured to perform the operations (601- 611) by performing hardware implemented logical functions, executing stored instructions, or executing algorithms for performing each of the operations. Alternatively, the system may comprise means for performing each of the operations described above. In this regard, according to an example embodiment, examples of means for performing operations (601- 611) may comprise, for example, the processor and/or a device or circuit for executing instructions or executing an algorithm for processing information as described above.
[0060] In this way, example embodiments of the present disclosure provide a system and a method for obstruction detection in the FOV of a camera. The disclosed method provides a significant advantage in terms of obstruction detection in the FOV of the camera and reduction in the generation of erroneous notifications. The obstruction detection process is performed in two stages; hence, the process provides high accuracy in detection of obstruction in the FOV of the camera. Consequently, the generation of erroneous notifications is reduced significantly, and triggering a wrong course of action may be avoided. For example, a notification may be provided to a driver of a vehicle if there is any obstruction in the FOV of the camera installed within the vehicle, which helps the driver take action before any hazardous situation takes place.
[0061] Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the invention. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the invention. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
CLAIMS:
We Claim:
1. A system (101) for obstruction detection in a field of view (FOV) of a camera (403), the system (101) comprising:
a memory (213) configured to store computer-executable instructions; and
one or more processors (201) configured to execute the instructions to:
acquire a plurality of images of the FOV of the camera (403), wherein the plurality of images includes a first image (405a) and a second image (405b);
compare the first image (405a) with the second image (405b);
determine an obstruction object (407a or 407b) in the FOV of the camera (403) based on the comparison of the first image (405a) with the second image (405b);
determine a contour (409a or 409b) of the obstruction object (407a or 407b);
determine a degree of blockage of the FOV of the camera (403) based on the contour (409a or 409b) of the obstruction object (407a or 407b); and
generate a notification based on the degree of blockage of the FOV of the camera (403).
2. The system (101) of claim 1, wherein the one or more processors (201) are further configured to
compare the degree of blockage with a threshold value, and
determine a timer value associated with the obstruction object (407a or 407b) in the FOV of the camera (403) based on the comparison of the degree of blockage with the threshold value.
3. The system (101) of claim 1, further comprising the camera (403) configured to capture each of the first image (405a) at a first time instant and the second image (405b) at a second time instant.
4. The system (101) of claim 3, wherein
the one or more processors (201) are further configured to extract one or more attributes of each of the first image (405a) and the second image (405b), and
the one or more attributes include at least a pixel intensity.
5. The system (101) of claim 4, wherein the one or more processors (201) are further configured to compare the first image (405a) with the second image (405b) based on the extracted one or more attributes of each of the first image (405a) and the second image (405b).
6. The system (101) of claim 1, wherein the one or more processors (201) are further configured to determine that the obstruction object (407a or 407b) corresponds to one of temporary obstruction or permanent obstruction based on a threshold time period.
7. The system (101) of claim 1, wherein the notification corresponds to one or more of an audio message, a video message, an audio-visual message, a vibration notification, or a text message.
8. A method for obstruction detection in a field of view (FOV) of a camera (403), the method comprising:
acquiring (601) a plurality of images of the FOV of the camera (403), wherein the plurality of images includes a first image (405a) and a second image (405b);
comparing (603) the first image (405a) with the second image (405b);
determining (605) an obstruction object (407a or 407b) in the FOV of the camera (403) based on the comparison of the first image (405a) with the second image (405b);
determining (607) a contour (409a or 409b) of the obstruction object (407a or 407b);
determining (609) a degree of blockage of the FOV of the camera (403) based on the contour (409a or 409b) of the obstruction object (407a or 407b); and
generating (611) a notification based on the degree of blockage of the FOV of the camera (403).