Abstract: A REAL-TIME EVENT MONITORING AND ALERTING SYSTEM AND METHOD THEREOF. Disclosed is a real-time event monitoring and alerting system. A server (101) may receive a visual media from a primary image capturing unit (102) based upon occurrence of an event at a location, and alert a secondary image capturing unit (103) to capture a supplementary visual media associated with the visual media. The secondary image capturing unit (103) may capture and perform partial analysis of the supplementary visual media and send it to the server (101). The server (101) may then perform comprehensive analysis of the visual media received from the primary (102) and secondary image capturing units (103). The server (101) may determine the type of event that occurred at the location based upon the comprehensive analysis. The server (101) may then transmit an alert notification to a user device (104) for assisting a concerned user to take appropriate action in response to the event determined at the location. [To be published with Figure 1]
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
A REAL-TIME EVENT MONITORING AND ALERTING SYSTEM AND METHOD THEREOF
APPLICANT:
Zensar Technologies Limited
An Indian entity having address at:
Zensar Knowledge Park, Plot # 4, MIDC, Kharadi, Off Nagar Road, Pune-411014, Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
CROSS REFERENCE TO RELATED APPLICATIONS
The present application claims priority from Indian Provisional Patent Application No. 201821049818 filed on 29th December 2018, the entirety of which is incorporated herein by reference.
TECHNICAL FIELD
The present subject matter described herein, in general, relates to a real-time event monitoring and alerting system. In particular, the present subject matter relates to a smart alerting system for enabling immediate remedial action in response to the detection of an event.
BACKGROUND
Technological advancements in event monitoring and alerting systems have reached into every aspect of human life. The evolution of monitoring and alert-generating applications has extensively influenced the lifestyle of human beings across various location-related aspects and operations.
In the present scenario, event monitoring and alerting is handled by camera-enabled electronic devices that are capable of capturing visual scenes. Although such monitoring systems have been successful in conveying the real-time data unfolding at specific locations, these systems have not been able to solve the problem of taking immediate subsequent action in case of any suspicious event or activity taking place at a particular location.
Consider the scenario of an accident on a highway. Monitoring systems employed at such places include smart cameras or CCTV cameras installed along the highway. The CCTV camera in such cases helps in determining the happenings or the real-time activity taking place at the accident spot. Today, if any such mishap occurs, the concerned authority gathers data from the CCTV camera placed at the spot, and only after collecting this data does the investigation into the mishap take place. Unfortunately, the present systems do not allow any immediate remedial action to be initiated, since the information of the mishap is not conveyed in real time so that corrective action can begin.
In the case of a theft, the CCTV cameras installed at suspicious locations do help in identifying the occurrence of the theft event. However, the CCTV feed does not solve the problem of taking immediate remedial action by sending alerts to the concerned personnel or individual so that such an event is handled and tackled in real time.
Hence, in today's scenario, it is difficult to know in advance, or in real time, whether any such suspicious event is occurring at a specific location. These suspicious events may include happenings on the roads, flood scenarios, the presence of potholes, traffic jams, accidents, thefts, etc. Such events lead to traffic jams, accidents, or even deaths on the roads.
In such crimes or mishaps occurring on the roads, remedial action is delayed since the present systems do not facilitate or provide an infrastructure for tackling the events in real time. Presently, tracking such events and acting on them is an extra manual process, as it needs the intervention of a government authority, a concerned individual, or other human resource to identify and take corresponding action once the mishap or crime has already happened. Further, the existing systems fail to detect the exact type of event that may have occurred, primarily due to a lack of efficient resources that would validate the captured data and perform real-time analytics on it to derive real-time insights on the events occurring at different locations.
Therefore, there is a long-felt need for a system that performs real-time monitoring of events by capturing real-time visual scenes and provides alerts to the respective people, so that they can act on the issues in real time and problems such as traffic jams, congestion on roads, road thefts, etc. are efficiently tackled.
SUMMARY
This summary is provided to introduce concepts related to an event monitoring and alerting system. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining or limiting the scope of the claimed subject matter.
In one embodiment, a real-time event monitoring and alerting system is disclosed. The system may comprise a primary image capturing unit, a secondary image capturing unit, a user device and a server. The server may comprise a memory coupled with a processor, wherein the processor is configured to execute programmed instructions stored in the memory. The server may be configured to receive a visual media from the primary image capturing unit. The primary image capturing unit may be configured to capture the visual media of a location, wherein the primary image capturing unit is installed at the location. The visual media may be captured by the primary image capturing unit based upon occurrence of an event at the location. The server may then alert the secondary image capturing unit that is within a predefined distance from the location to capture a supplementary visual media associated with the visual media and perform partial analysis of the supplementary visual media to obtain a partially analyzed visual media. The server may then receive the partially analyzed visual media captured from the secondary image capturing unit. Further, the server may perform comprehensive analysis of the visual media and the partially analyzed visual media that are received from the primary image capturing unit and the secondary image capturing unit respectively. The server may then determine the type of event that occurred at the location based upon the comprehensive analysis of the visual media and the partially analyzed visual media. The server may then transmit a signal to an alert notification unit, wherein the alert notification unit, upon receipt of the signal, may be configured to generate an alert notification. The alert notification unit may then transmit the alert notification to the user device for assisting the user of the user device to take an appropriate action based upon the type of the event determined at the location.
In another embodiment, a method for real-time event monitoring and alerting is disclosed. The method may first receive a visual media from a primary image capturing unit. The primary image capturing unit may be configured to capture the visual media of a location, wherein the image capturing unit is installed at the location. The visual media may be captured by the primary image capturing unit based upon occurrence of an event at the location. The method may then alert a secondary image capturing unit that is within a predefined distance from the location to capture a supplementary visual media associated with the visual media and perform partial analysis of the supplementary visual media to obtain a partially analyzed visual media. The method may then receive the partially analyzed visual media captured from the secondary image capturing unit. The method may then perform comprehensive analysis of the visual media and the partially analyzed visual media received from the primary image capturing unit and the secondary image capturing unit respectively. The method may then determine the type of event that occurred at the location based upon the comprehensive analysis of the visual media and the partially analyzed visual media. The method may then transmit a signal to an alert notification unit, wherein the alert notification unit, upon receipt of the signal, is configured to generate an alert notification. The method may then transmit the alert notification to a user device for assisting a user of the user device to take an appropriate action based upon the type of the event determined at the location.
BRIEF DESCRIPTION OF THE DRAWINGS
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
Figure 1 illustrates a real-time event monitoring and alerting system comprising a primary image capturing unit (102), a secondary image capturing unit (103), a user device (104) and a server (101), in accordance with an embodiment of the present disclosure.
Figure 2 illustrates the server (101) and its components in detail, in accordance with an embodiment of the present disclosure.
Figure 3 discloses steps (300) involved in the overall implementation of the system (100) via a processor, in accordance with an embodiment of the present disclosure.
Figure 4 discloses a method (400) of implementation of the real-time event monitoring and alerting system, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
Reference throughout the specification to “various embodiments,” “some embodiments,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in various embodiments,” “in some embodiments,” “in one embodiment,” or “in an embodiment” in places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items.
It must also be noted that the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, exemplary methods are described herein. The disclosed embodiments are merely exemplary of the disclosure, which may be embodied in various forms.
Various modifications to the embodiments may be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. One of ordinary skill in the art will readily recognize that the present disclosure is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
The present disclosure solves the problem of real-time event monitoring and alerting by enabling immediate action based on the severity of a suspicious event or activity, such as an accident, traffic jam, theft, etc., taking place at any location. In other words, the present subject matter relates to a smart alerting system for enabling immediate remedial action in response to the detection of an event.
The real-time event monitoring and alerting system may be configured to provide smart alerting at one or more locations including, but not limited to, streets, roads, highways, banks, etc. Further, the visual media of the present disclosure may include images, videos, CCTV footage, etc.
Figure 1 illustrates a system (100) for real-time event monitoring and smart alerting in accordance with an embodiment of the present disclosure. The system (100) may include a primary image capturing unit (102), a secondary image capturing unit (103), a user device (104) and a server (101).
Referring now to figure 1, the server (101) may receive a visual media from a primary image capturing unit (102). The primary image capturing unit may include a CCTV camera installed on a street within a city area that captures the real-time visual media of the street. The CCTV camera may capture the image of an accident event that occurred on the street. The captured image may be communicated to the server (101). The server (101) may begin analyzing the image received from the CCTV camera. Based on the analysis, the server (101) may determine that it needs additional information or data that cannot be collected or gathered from the CCTV camera. The server (101) may then alert an unmanned aerial vehicle (UAV), e.g. a drone, that may be within a predefined distance from the street. Here, the predefined distance implies a location that is within proximity to the street where the event has occurred. Immediately, the drone may reach the target street and capture real-time footage. The drone may then perform partial analysis on the captured footage. It must be noted herein that partial analysis implies primary analysis of the captured visual media. Here, partial analysis may involve determining the type of event that occurred at the location and its details, such as vehicles damaged in the accident, a count of people at the accident scene, injured individuals observed at the location, etc. Further, the drone may transmit the partially analyzed data to the server (101). The server (101) may then perform comprehensive analysis of the CCTV image and the image data captured by the drone. After analyzing, the server (101) may determine that an accident has occurred on the street based upon the comprehensive analysis of the CCTV image and the image data captured by the drone. Here, comprehensive analysis may relate to the server-side model determining and confirming the condition or severity of the accident event, injured individuals, damaged vehicles, etc. The server (101) may then transmit a signal to the alert notification module that lies within, or external to, the server (101). The alert notification module then generates a corresponding alert that needs to be sent to the concerned user. The generated alert is then sent to the user of the respective authority, i.e. police personnel of the concerned police station in this case. The police may then take remedial action immediately upon receipt of the notification from the real-time event monitoring and alerting system. Hence, an event such as the accident alerts the police personnel immediately for taking corrective steps to avoid any further damage.
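By way of illustration only, the flow described above may be sketched in Python as follows. This is a minimal, non-limiting sketch; every function, class and value in it (handle_event, drone_capture_and_analyze, the example location, etc.) is a hypothetical placeholder, not the disclosed implementation.

```python
# Minimal, self-contained sketch of the flow above. Every name here is a
# hypothetical placeholder standing in for the described components.

from dataclasses import dataclass

@dataclass
class PartialAnalysis:
    event_type: str   # type of event as judged on the drone (103)
    details: dict     # e.g. damaged vehicles, injured individuals

def drone_capture_and_analyze(location: str) -> PartialAnalysis:
    """Stand-in for the secondary unit (103): capture footage nearby and
    perform the partial (primary) analysis on-device."""
    return PartialAnalysis("accident", {"vehicles_damaged": 2, "injured": 1})

def comprehensive_analysis(primary_media: bytes, partial: PartialAnalysis) -> str:
    """Stand-in for the server-side (101) model that confirms the event
    type and severity from both media sources."""
    return "severe" if partial.details.get("injured", 0) > 0 else "minor"

def handle_event(location: str, primary_media: bytes) -> None:
    # Server (101) has received media from the CCTV camera (102) and now
    # alerts a drone within a predefined distance of the location.
    partial = drone_capture_and_analyze(location)
    severity = comprehensive_analysis(primary_media, partial)
    # Signal the alert notification unit, which notifies the user device (104).
    print(f"ALERT to police: {partial.event_type} ({severity}) at {location}")

handle_event("example street, Pune", b"<cctv frame bytes>")
```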
In another embodiment, the server (101) may receive a visual media from the primary image capturing unit (102). The primary image capturing unit may include a CCTV camera installed on a road within a city area that captures the real-time visual media of the road. The CCTV camera may capture the image of a pothole on the road. Further, this pothole is observed to cause traffic jams in the city. The CCTV camera may communicate the captured pothole image to the server (101). From subsequent images sent by the CCTV camera, the server (101) may notice that traffic jams are occurring in the area where the pothole is prevalent. The server (101) may seek additional information by alerting a drone that is within the vicinity of the pothole area. The drone may receive the signal from the server (101) and visit the pothole area for capturing further details of the pothole scene. Once captured, the drone may perform partial analysis of the captured visual media. Here, the partial analysis involves determining the depth of the pothole and whether the pothole was manually formed or formed due to bad road construction practices. In addition, such analysis may also include analyzing whether the road or the pothole is filled with water, etc. Further, the drone may transmit the partially analyzed visual media to the server (101). The server (101) may then perform comprehensive analysis of the visual data received from the CCTV camera and the drone. Here, comprehensive analysis may relate to the server-side model determining and confirming the condition or severity of the pothole or the damaged road. Based on the analysis, the server (101) may determine that a pothole event has occurred within the particular city area. The server (101) may further identify that this event is causing traffic jams within the city. The server (101), based upon the analysis, may send an alert to the municipal corporation operating in the pothole area. Further, the alerts may also be sent to the concerned personnel of the municipal corporation handling the activities for that specific area. After receiving these alerts, the municipal corporation can take immediate steps for mending the pothole, which would in turn solve the problem of traffic jams.
In another embodiment, consider a theft that has just taken place at a location within the city. A near-by CCTV camera may capture the visual scenes of that location and send them to the server (101). The captured visual scenes may involve video, audio or a live feed of the theft scene. The server may determine the need for additional data with respect to the theft event that just took place at the location. The server (101) may therefore alert a drone that is within proximity of the theft spot. The drone receiving the alert may reach the location to capture supplementary visual media details and perform primary analysis of the captured visual media. Here, the primary analysis may involve determining the type of theft, wherein the type may include snatching of an object by an individual from a subject, theft performed by a gang, theft performed in banks, etc. Further, the primary analysis may also include identifying the vehicle involved in the theft event, the type of weapons used at the theft scene or the number of people involved in the theft event. In addition, based on the determined theft type, the drone may send alerts to the server (101). Further, in case the drone analysis is not able to determine the type of theft occurring at the location, it may send the captured visual media details to the server (101) for confirmation. The visual data is then transmitted to the server (101) by the drone. The server (101) may then perform comprehensive analysis of the visual data received from the CCTV camera and the drone. Here, comprehensive analysis of the visual data may involve determining the severity of the identified theft event, wherein the type of theft event may be determined by the drone in the primary analysis step. The severity of the theft event is identified by the server (101) by using machine learning techniques, computer vision methods and image/video analytics. Based on the analysis of this visual data, the server (101) may determine the severity of the theft event at the location under consideration. The server (101) may then send alerts to the police station that is near-by to the theft spot. In another implementation, the alerts may also be sent to the police officer responsible for handling mishaps at the concerned location. The police station personnel can therefore take immediate action against the theft event in real time by using the disclosed event monitoring and smart alerting system, thus avoiding any delay in handling a criminal event such as theft.
A person skilled in the art would easily realize and appreciate that, although the system (100) has been described above in various embodiments to monitor and alert on real-time events, including detection of potholes, thefts, accidents, and the like, the present disclosure is not limited to any specific event and can further encompass any other events wherein remedial action is to be taken in real time as soon as such events are detected and reported to the server (101) in real time.
In an embodiment, though the present subject matter is explained considering that the system (100) facilitates real-time monitoring of events via a server, it may be understood that the system (100) may facilitate real-time monitoring of events via a variety of user devices, such as, but not limited to, a portable computer, a personal digital assistant, a handheld device, a mobile phone, a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a mobile device, and the like. In one embodiment, the user device (104) herein belongs to stakeholders/personnel to be notified by the server (101) on occurrence of an event at respective locations.
In an embodiment, the user device (104) may be communicatively coupled with the server (101) via a network. In an embodiment, the network may be a wireless network such as Bluetooth, Wi-Fi, 3G, 4G/LTE and the like, a wired network or a combination thereof. The network can be accessed by the user device (104) using wired or wireless network connectivity means including updated communications technology. The user device (104) may be any electronic device, communication device, image capturing device, machine, software, automated computer program, a robot or a combination thereof.
The aforementioned computing devices may support communication over one or more types of networks in accordance with the described embodiments. For example, some computing devices and networks may support communications over a Wide Area Network (WAN), the Internet, a telephone network (e.g., analog, digital, POTS, PSTN, ISDN, xDSL), a mobile telephone network (e.g., CDMA, GSM, NDAC, TDMA, E-TDMA, NAMPS, WCDMA, CDMA-2000, UMTS, 3G, 4G), a radio network, a television network, a cable network, an optical network (e.g., PON), a satellite network (e.g., VSAT), a packet-switched network, a circuit-switched network, a public network, a private network, and/or other wired or wireless communications network configured to carry data. Computing devices and networks also may support wireless wide area network (WWAN) communications services including Internet access such as EV-DO, EV-DV, CDMA/1×RTT, GSM/GPRS, EDGE, HSDPA, HSUPA, and others.
The aforementioned computing devices and networks may support wireless local area network (WLAN) and/or wireless metropolitan area network (WMAN) data communications functionality in accordance with Institute of Electrical and Electronics Engineers (IEEE) standards, protocols, and variants such as IEEE 802.11 (“WiFi”), IEEE 802.16 (“WiMAX”), IEEE 802.20x (“Mobile-Fi”), and others. Computing devices and networks also may support short range communication such as a wireless personal area network (WPAN) communication, Bluetooth® data communication, infrared (IR) communication, near-field communication, electromagnetic induction (EMI) communication, passive or active RFID communication, micro-impulse radar (MIR), ultra-wide band (UWB) communication, automatic identification and data capture (AIDC) communication, and others.
In accordance with embodiments of the present disclosure, the visual media disclosed in the real-time event monitoring and alerting system (100) may include at least one of an image, a video, a multimedia and a CCTV footage.
In accordance with embodiments of the present disclosure, the disclosed primary image capturing unit (102) may include at least one of a camera, a CCTV camera, and any other electronic device capable of capturing the visual media.
In accordance with embodiments of the present disclosure, the disclosed secondary image capturing unit (103) may include at least one of an unmanned aerial vehicle (UAV), a cellular phone, or a camera mounted on a mobile vehicle.
In accordance with embodiments of the present disclosure, the comprehensive analysis of the visual media received from the primary image capturing unit (102) and the secondary image capturing unit (103) includes mapping the captured visual media to the real time data obtained from a third-party navigation service API, e.g. Google Map® API.
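A minimal sketch of such a mapping step is given below, assuming a hypothetical traffic-status endpoint; the URL, query parameters and response fields are illustrative placeholders and do not describe any real provider's API (in a real deployment, the provider's documented API, e.g. the Google Map® API, would be consulted).

```python
# Hypothetical sketch of corroborating a detection against live traffic
# data from a navigation service. The endpoint, parameters and response
# fields are placeholders, not any real provider's API.

import json
from urllib import parse, request

def traffic_near(lat: float, lng: float, api_key: str) -> dict:
    params = parse.urlencode({"lat": lat, "lng": lng, "key": api_key})
    url = f"https://maps.example.test/traffic?{params}"  # placeholder URL
    with request.urlopen(url) as resp:
        return json.load(resp)

def corroborate(detection: dict, api_key: str) -> bool:
    """True if live traffic data is consistent with the detection, e.g. a
    pothole detection coinciding with slowed traffic at that point."""
    traffic = traffic_near(detection["lat"], detection["lng"], api_key)
    return traffic.get("congestion_level", 0) >= 3  # assumed 0-5 scale
```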
In accordance with embodiments of the present disclosure, the comprehensive analysis of the visual media received from the primary image capturing unit (102) and the secondary image capturing unit (103) may further comprise applying one or more of machine learning techniques, Artificial Intelligence (AI) techniques and predictive modelling to learn and predict the type of the event occurring at the location. The disclosed embodiments may help in predicting the occurrence of the event in advance, which would allow the concerned authority to take necessary steps before the actual event occurs.
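For illustration, a predictive model of this kind might be sketched as a simple classifier over engineered scene features, as below. The feature set, training rows and labels are invented for the example and carry no significance beyond showing the shape of such a model.

```python
# Illustrative predictive-modelling sketch: a classifier trained on
# hypothetical per-frame features to predict the event type.
# Feature choices and labels here are assumptions for illustration only.

from sklearn.ensemble import RandomForestClassifier

# Features: [vehicle_count, avg_speed_kmph, water_level_index, crowd_count]
X_train = [
    [40, 8,  0, 12],   # slow, dense traffic  -> "traffic_jam"
    [5, 60,  0,  2],   # free-flowing road    -> "normal"
    [10, 20, 4,  3],   # standing water       -> "flood"
    [15, 0,  0, 25],   # stopped, crowd       -> "accident"
]
y_train = ["traffic_jam", "normal", "flood", "accident"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

print(model.predict([[35, 10, 0, 10]]))  # likely "traffic_jam"
```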
In one embodiment, machine learning techniques may involve neural networks and classification analysis. Further, computer vision methods may be used for image classification, localization and pattern detection before performing any kind of analysis of the captured visual media. Furthermore, a set of images may be created by classifying the images based on the determined type of theft, pothole, etc., wherein the set of images may be used for training the neural networks. In addition, an existing open-source image database such as ImageNet may be used while performing machine learning analysis of the captured visual media. Further, in order to increase the speed and performance of the discussed machine learning methods, techniques such as You Only Look Once (YOLO), Single Shot MultiBox Detector (SSD), and Region-Based Fully Convolutional Networks (R-FCN) may be used.
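As a concrete illustration of the detector family named above, the following sketch runs a Darknet-format YOLO model through OpenCV's DNN module. The file paths (yolov3.cfg, yolov3.weights, coco.names, cctv_frame.jpg) are assumptions; any compatible model and frame would do.

```python
# Sketch of single-shot detection with a YOLO model loaded through
# OpenCV's DNN module. The cfg/weights/class-name paths are assumptions.

import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().splitlines()

image = cv2.imread("cctv_frame.jpg")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

# Each detection row is [cx, cy, w, h, objectness, per-class scores...].
for output in outputs:
    for detection in output:
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:  # confidence threshold
            print(classes[class_id], float(scores[class_id]))
```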
In another embodiment, a Deep Learning Tracker may be used for object tracking under the computer vision technique. The Deep Learning Tracker proposes offline pre-training and online fine-tuning on image datasets. The process involves two steps: Step 1 may include offline unsupervised pre-training of a stacked denoising auto-encoder using large-scale natural image datasets to obtain a general object representation. Step 2 may include combining the encoder part of the pre-trained network with a classifier to obtain a classification network, and then using the positive and negative samples obtained from the initial frame to fine-tune the network so that it can discriminate the current object from the background in the set of images.
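The two-step scheme may be illustrated with the following minimal PyTorch sketch. Layer sizes, noise level, sample counts and iteration counts are illustrative assumptions, not values taken from the Deep Learning Tracker literature.

```python
# Minimal PyTorch sketch of the two-step scheme described above:
# (1) unsupervised pre-training of a denoising auto-encoder, then
# (2) reusing its encoder with a classifier head fine-tuned on
# positive/negative samples from the initial frame.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1024))

# Step 1: offline pre-training on unlabeled natural-image patches.
patches = torch.rand(512, 1024)  # stand-in for a large-scale dataset
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
for _ in range(10):
    noisy = patches + 0.1 * torch.randn_like(patches)  # corrupt the input
    loss = nn.functional.mse_loss(decoder(encoder(noisy)), patches)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2: online fine-tuning with a classifier head that separates the
# tracked object (positives) from the background (negatives).
classifier = nn.Sequential(encoder, nn.Linear(64, 1))
pos, neg = torch.rand(32, 1024), torch.rand(32, 1024)  # from the initial frame
x = torch.cat([pos, neg])
y = torch.cat([torch.ones(32, 1), torch.zeros(32, 1)])
opt2 = torch.optim.Adam(classifier.parameters())
for _ in range(10):
    loss = nn.functional.binary_cross_entropy_with_logits(classifier(x), y)
    opt2.zero_grad(); loss.backward(); opt2.step()
```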
In accordance with embodiments of the present disclosure, the type of event determined may comprise at least one of an accident, a pothole, a flood, and a theft.
In accordance with embodiments of the present disclosure, the appropriate action may comprise notifying at least one of an ambulance, near-by police station, an individual subscriber, an emergency contact personnel, and a government authority.
Fig 2 illustrates the server (101) and its components, in accordance with an embodiment of the present disclosure. The server (101) may comprise a processor (201), an Input/Output (I/O) interface (202), and a memory (203). The memory may further include programmed instructions (204) and data (205). The data (205) may include a repository (206) and other data (207). The other data (207), amongst other things, serves as a repository for storing data processed, received, and generated by one or more components and programmed instructions.
The processor (201) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor (201) is configured to fetch and execute computer-readable instructions stored in the memory (203).
Referring to Fig. 2, the I/O interface (202) may include a variety of software and hardware interfaces, for example, a web interface, a user interface, a graphical user interface, and the like. The I/O interface (202) may allow the server (101) to interact with a user directly or through electronic devices. Further, the I/O interface (202) may enable the server (101) to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface (202) can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, Bluetooth beacons, Bluetooth Low Energy or satellite. The I/O interface (202) may include one or more ports for connecting a number of devices to one another or to another server.
The memory (203) may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory (203) may further include programmed instructions (204) for performing particular tasks.
The programmed instructions may include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The processor (201) may execute the programmed instructions (204) to perform various tasks/functionalities for real time monitoring and alerting of events.
In one embodiment, the processor (201) may execute the programmed instructions (204) for receiving the visual media content from the primary image capturing unit (102) and the secondary image capturing unit (103). Further, the processor (201) may execute the programmed instructions (204) for performing comprehensive analysis of the visual media content received from the primary image capturing unit (102) and the partially analyzed visual data received from the secondary image capturing unit (103). The processor (201) may execute the programmed instructions (204) for determining the type of event occurring at the specific location by analyzing the visual media content received from the primary image capturing unit (102) and the secondary image capturing unit (103), wherein the event may include accident, theft, presence of pothole leading to traffic jams, etc.
In one implementation, the processor (201) may execute the programmed instructions (204) for identifying and locating the secondary image capturing units (103), that are within the vicinity of the mishap location. Further, the processor (201) may execute the programmed instructions (204) for triggering the alert notification module for generating alerts when it receives the signal from the server (101), wherein the alert notification module is within or external to the server (101). Further, the alert notification module may be configured for sending alerts and notifying the concerned authority such as the police station, the municipal corporation, emergency personnel, etc. on receiving the signal from the server (101) concerning the occurrence of the event.
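Purely for illustration, the alert notification module's behaviour on receiving a signal might look like the sketch below; the gateway endpoint and payload fields are hypothetical placeholders, not a disclosed interface.

```python
# Hypothetical sketch of the alert notification module: on a signal from
# the server (101) it formats an alert and forwards it toward the user
# device (104). The gateway URL and payload schema are placeholders.

import json
from urllib import request

def send_alert(event_type: str, location: str, recipient: str) -> None:
    payload = json.dumps({
        "to": recipient,  # e.g. the near-by police station's device
        "title": f"{event_type.upper()} detected",
        "body": f"{event_type} reported at {location}; immediate action advised",
    }).encode()
    req = request.Request(
        "https://push.example.test/notify",  # placeholder endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # fire the notification
```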
In one implementation, the processor (201) may execute the programmed instructions (204) for receiving the visual media transmitted from the primary image capturing unit (102) and the secondary image capturing unit (103) via a transceiver (210) within the server (101). Further, the transceiver may be configured for providing communication between the primary image capturing unit (102), the secondary image capturing unit (103) and the server (101).
Figure 3 illustrates steps (300) implemented by the system (100), in accordance with an embodiment of the present disclosure.
The method may comprise the first step of receiving a visual media from the primary image capturing unit. The primary image capturing unit may be configured to capture the visual media of a location, wherein the image capturing unit is installed at the location. The visual media is captured by the primary image capturing unit based upon occurrence of an event at the location. The method may comprise the second step of alerting a secondary image capturing unit that is within a predefined distance from the location to capture a supplementary visual media associated with the visual media and perform partial analysis of the supplementary visual media to obtain a partially analyzed visual media. The method may comprise the third step of receiving the partially analyzed visual media captured from the secondary image capturing unit. The method may comprise the fourth step of performing comprehensive analysis of the visual media and the partially analyzed visual media received from the primary image capturing unit and the secondary image capturing unit respectively. The method may comprise the fifth step of determining the type of event that occurred at the location based upon the comprehensive analysis of the visual media and the partially analyzed visual media. The method may comprise the sixth step of transmitting the alert notification to a user device for assisting a user of the user device to take an appropriate action based upon the type of the event determined at the location. In the last step, the concerned user or authority may take immediate action in response to the alert being sent by the event monitoring and alerting system.
In another exemplary embodiment, the processor (201) is configured to receive the visual media from the primary image capturing unit (102) and the secondary image capturing unit (103) periodically at a prescheduled time via a transceiver (210) based on a set of commands.
Figure 4 illustrates a method (400) implemented by the system (100) in accordance with an embodiment of the present disclosure.
The method may comprise the first step (401) of receiving a visual media from the primary image capturing unit. The primary image capturing unit may be configured to capture the visual media of a location, wherein the image capturing unit is installed at the location. The visual media is captured by the primary image capturing unit based upon occurrence of an event at the location. The method may comprise the second step (402) of alerting a secondary image capturing unit that is within a predefined distance from the location to capture a supplementary visual media associated with the visual media and perform partial analysis of the supplementary visual media to obtain a partially analyzed visual media. The method may comprise the third step (403) of receiving the partially analyzed visual media captured from the secondary image capturing unit. The method may comprise the fourth step (404) of performing comprehensive analysis of the visual media and the partially analyzed visual media received from the primary image capturing unit and the secondary image capturing unit respectively. The method may comprise the fifth step (405) of determining the type of event that occurred at the location based upon the comprehensive analysis of the visual media and the partially analyzed visual media. The method may comprise the sixth step (406) of transmitting the alert notification to a user device for assisting a user of the user device to take an appropriate action based upon the type of the event determined at the location. In the last step, the concerned user or authority may take immediate action in response to the alert being sent by the event monitoring and alerting system.
In one embodiment, the communication modes may be a Bluetooth beacon network, a BLE network, a Bluetooth 4.0 network, a Bluetooth 4.1 network, a Bluetooth 4.2 network, a wireless network, a wired network or a combination thereof. The communication mode can be implemented as one of the different types of networks, such as an intranet, a local area network (LAN), a wide area network (WAN), the internet, and the like. The communication mode may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), handshake protocols, full-duplex communication algorithms, and the like, to communicate with one another. Further, the communication mode may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
In one exemplary embodiment, the real time event monitoring and alerting system may be implemented at locations such as streets, roads, highways, banks, etc.
The embodiments, examples and alternatives of the preceding paragraphs or the description and drawings, including any of their various aspects or respective individual features, may be taken independently or in any combination. Features described in connection with one embodiment are applicable to all embodiments, unless such features are incompatible.
The foregoing description shall be interpreted as illustrative and not in any limiting sense. A person of ordinary skill in the art would understand that certain modifications could come within the scope of this disclosure, the true scope and content of which are to be determined with reference to the appended claims.
Although implementations of the systems and methods for real-time event monitoring and alerting have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations of the real-time event monitoring and alerting system.
Exemplary embodiments discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include those provided by the following features.
• Providing a smart infrastructure for real-time event monitoring and alerting.
• Providing a smart system that reduces manual human intervention while handling critical events such as accidents, traffic jams, thefts, etc. in real time;
• Providing a smart system that collects the visual media from each drone/CCTV at the location based on the occurrence of an event and performs analysis of the visual media; the system further sends alerts to the concerned authority or personnel considering the severity of the event;
• Providing a smart system that applies one or more of machine learning techniques, Artificial Intelligence (AI) techniques and predictive modelling to learn and predict the type of the event occurring at the location. Hence, the disclosed smart system may help in predicting the occurrence of the event in advance, wherein the event may include happenings on the roads, flood scenarios, presence of potholes, traffic jams, accidents, thefts, etc. This would allow the concerned authority to take necessary steps or remedial action before the actual event occurs.
CLAIMS
We Claim:
1. A real-time event monitoring and alerting system (100), comprising:
a primary image capturing unit (102);
a secondary image capturing unit (103);
a user device (104); and
a server (101) comprising a memory (203) coupled with a processor (201), wherein the processor (201) is configured to execute programmed instructions stored in the memory (203) for:
receiving a visual media from a primary image capturing unit (102), wherein the primary image capturing unit (102) is configured to capture the visual media of a location, wherein the primary image capturing unit (102) is installed at the location, and wherein the visual media is captured by the primary image capturing unit (102) based upon occurrence of an event at the location;
alerting a secondary image capturing unit (103) that is within a predefined distance from the location to capture a supplementary visual media associated with the visual media and perform partial analysis of the supplementary visual media to obtain a partially analysed visual media;
receiving the partially analysed visual media captured from the secondary image capturing unit (103);
performing comprehensive analysis of the visual media and the partially analysed visual media received from the primary image capturing unit (102) and the secondary image capturing unit (103) respectively;
determining a type of event that occurred at the location based upon the comprehensive analysis of the visual media and the partially analysed visual media;
transmitting a signal to an alert notification unit, wherein the alert notification unit, upon receipt of the signal, is configured to generate an alert notification and transmit the alert notification to a user device (104) for assisting a user of the user device (104) to take an appropriate action based upon the type of the event determined at the location.
2. The real-time event monitoring and alerting system (100) as claimed in claim 1, wherein the visual media is at least one of an image, a video, a multimedia and a CCTV footage.
3. The real-time event monitoring and alerting system (100) as claimed in claim 1, wherein the primary image capturing unit (102) is at least one of a camera, a CCTV camera, and any other electronic device capable of capturing the visual media.
4. The real-time event monitoring and alerting system (100) as claimed in claim 1, wherein the secondary image capturing unit (103) is at least one of an unmanned aerial vehicle (UAV), a cellular phone, or a camera mounted on a mobile vehicle.
5. The real-time event monitoring and alerting system (100) as claimed in claim 1, wherein the processor (201) is configured to receive the visual media from the primary image capturing unit (102) and the secondary image capturing unit (103) periodically at a prescheduled time via a transceiver (210) based on a set of commands.
6. The real-time event monitoring and alerting system (100) as claimed in claim 1, wherein the comprehensive analysis of the visual media received from the primary image capturing unit (102) and the secondary image capturing unit (103) includes mapping the captured visual media to the real time data obtained from a third-party navigation service API.
7. The real-time event monitoring and alerting system (100) as claimed in claim 1, wherein the comprehensive analysis of the visual media received from the primary image capturing unit (102) and the secondary image capturing unit (103) comprises applying one or more of machine learning techniques, Artificial Intelligence (AI) techniques and predictive modelling to learn and predict the type of the event occurring at the location.
8. The real-time event monitoring and alerting system (100) as claimed in claim 1, wherein the type of event determined comprises at least one of an accident, a pothole, a flood, and a theft.
9. The real-time event monitoring and alerting system (100) as claimed in claim 1, wherein the signal is transmitted to the alert notification unit present within the server (101) or external thereof, and wherein the appropriate action comprises notifying at least one of an ambulance, near-by police station, an individual subscriber, an emergency contact personnel, and a government authority.
10. A method for real-time event monitoring and alerting, the method comprising:
receiving a visual media from a primary image capturing unit (102), wherein the primary image capturing unit (102) is configured to capture the visual media of a location, wherein the primary image capturing unit (102) is installed at the location, and wherein the visual media is captured by the primary image capturing unit (102) based upon occurrence of an event at the location;
alerting a secondary image capturing unit (103) that is within a predefined distance from the location to capture a supplementary visual media associated with the visual media and perform partial analysis of the supplementary visual media to obtain a partially analysed visual media;
receiving the partially analysed visual media captured from the secondary image capturing unit (103);
performing comprehensive analysis of the visual media and the partially analysed visual media received from the primary image capturing unit (102) and the secondary image capturing unit (103) respectively;
determining a type of event that occurred at the location based upon the comprehensive analysis of the visual media and the partially analysed visual media;
transmitting a signal to an alert notification unit within the server (101) or external thereof, wherein the alert notification unit, upon receipt of the signal, is configured to generate an alert notification and transmit the alert notification to a user device (104) for assisting a user of the user device (104) to take an appropriate action based upon the type of the event determined at the location.
Dated this 26th of December 2019