ABSTRACT
DIGITAL TWIN-BASED SYSTEM AND METHOD FOR DISASTER MANAGEMENT
A system (100) and a method (1000) for rescue management are disclosed. The system (100) includes at least one input device (102), at least one analysis unit (104), and at least one output device (106). The at least one input device (102) is configured to receive disaster data input and location tracking data input from at least one user. The at least one input device (102) is associated with the at least one user. The at least one analysis unit (104) is configured to analyze the disaster data input and the location tracking data input from the at least one input device (102) to generate data output comprising visual data and route guidance data. The at least one output device (106) is configured to indicate the visual information and guidance information to the at least one user. (FIG. 1)
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION-
DIGITAL TWIN-BASED SYSTEM AND METHOD FOR DISASTER MANAGEMENT
2. APPLICANT(S)
1. NAME- 10D DESIGN SERVICES OPC PRIVATE LIMITED
NATIONALITY- Indian
ADDRESS- BC 2, SUPRIYA GARDENS CHSL, NAGRAS ROAD, AUNDH, PUNE-411007, MAHARASHTRA
3. PREAMBLE TO THE DESCRIPTION
COMPLETE
The following specification particularly describes the invention and the manner in which it is to be performed.
DIGITAL TWIN-BASED SYSTEM AND METHOD FOR DISASTER MANAGEMENT
TECHNICAL FIELD
[0001] The present invention relates to disaster management and, more particularly, to a digital twin-based system and method for managing disasters such as fire, smoke, gas leaks, etc.
BACKGROUND
[0002] Disasters such as fire, smoke, gas leakage, and other mass emergencies represent significant threats to mankind in terms of mortality, injuries, chaotic reactions of civilians, etc. Due to advancements in technology, the Internet of Things (IoT) is finding new applications in many fields. The IoT is a network of physical objects embedded with sensors, software, and other technologies to connect and exchange data with other devices and systems over the Internet.
[0003] Existing prior-art systems lack the integration of detector data, building data, real-time disaster data, and data suggested by rescue personnel that would enable effective and faster rescue operations. To provide effective disaster management, there is a need to integrate detector data, building data, real-time disaster data, and data suggested by the rescue personnel.
SUMMARY OF THE INVENTION
[0004] Accordingly, the present invention provides a 3D digital twin-based disaster management system, primarily designed to enhance fire, smoke, and gas leak response operations. This system integrates IoT devices with digital twin technology to create a virtual representation of a building, helping both occupants and rescue personnel navigate emergency situations efficiently. The system provides real-time visualization of fire and smoke locations in 3D models, 2D schematics, and floor plans, assisting in identifying escape routes, fire-fighting equipment locations, and trapped occupants.
[0005] For occupants, the system offers an interactive QR code-based access to a web portal, guiding them toward the nearest escape routes, fire staircases, lifts, and refuge areas, while also displaying the location of fire extinguishers and hose reels.
[0006] For firefighters and rescue teams, it offers strategic navigation assistance to reach affected areas, locate hydrants and booster connections, and understand the building’s fire safety features, such as material composition, structural integrity, and access routes.
[0007] The system is built using development platforms and tools such as Unreal Engine, ReactJS, Three.js, Python, Flask, HiveMQ, and AWS, and it employs the MQTT protocol for efficient communication between hardware and software components. The IoT hardware includes ESP8266-based microcontrollers, infrared flame sensors, smoke and gas sensors, and LoRa wireless technology for larger areas. Additionally, QR codes at entrances and inside buildings help identify occupants' locations and assist firefighters in planning their approach.
[0008] Beyond fire emergencies, the system can be adapted to other disasters in commercial, industrial, healthcare, educational, and residential buildings, making it a versatile and scalable disaster response solution. It also integrates with existing fire management systems using HTTP, JSON, or MQTT protocols, ensuring real-time data sharing and seamless interoperability with other safety infrastructure.
[0009] This system is designed for rescue management in case of disasters such as fire, smoke, or gas leaks in a building. It consists of three key components: input devices, an analysis unit, and output devices, all working together to help occupants evacuate and assist rescue teams in responding effectively.
[00010] The system starts by collecting real-time disaster data and user location data through input devices (e.g., smartphones, tablets, or control panels). The disaster data is gathered via IoT microcontrollers, which receive signals from sensors like flame detectors, smoke sensors, and gas (ppm) sensors installed inside the building. These sensors help identify fire, smoke, or gas leaks in real time.
[00011] Once the data is collected, it is sent to the analysis unit, which could be a server, cloud-based processing unit, or an edge computing device. This unit processes the information to determine: where the disaster is happening within the building, where the occupants are located inside the building, what escape routes are available for safe evacuation, and what access routes are best for rescue teams to enter the building.
[00012] The system then generates visual guidance using a 3D digital twin—a virtual model of the building. This includes 3D and 2D floor plans, escape routes, and locations of firefighting equipment.
[00013] The processed information is then sent to output devices, such as smartphones, AR headsets, LED screens, or control room monitors. Occupants receive real-time escape guidance, while rescue teams get entry route directions to mitigate the disaster.
[00014] The devices communicate using MQTT, HTTP, or JSON protocols, ensuring fast and efficient data transfer.
[00015] The system can be accessed through QR codes placed at strategic locations (e.g., fire exits, staircases, and lift areas). Scanning a QR code provides instant escape route guidance to users.
[00016] The system can send data to firefighters and rescue teams remotely, helping them strategize before entering the affected area. Data updates dynamically in real-time, meaning as conditions change (e.g., fire spreads), the system adjusts guidance accordingly.
[00017] The analysis unit includes AI, which can predict fire spread patterns and recommend better evacuation or firefighting strategies.
[00018] The system supports a variety of hardware components, including ESP8266, ESP32, Raspberry Pi, Arduino, and LoRa/4G/5G-enabled microcontrollers, along with various sensors like flame detectors, smoke sensors, and air quality monitors.
[00019] As compared to the conventional art, the system introduces a unique combination of 3D digital twin technology, IoT-based real-time disaster monitoring, and AI-driven evacuation planning. While individual components such as fire sensors, IoT microcontrollers, and evacuation guidance systems may be known, the specific integration of these elements into a real-time, dynamically updating rescue management system is novel. The inclusion of a virtual building model (3D digital twin) for escape and entry guidance, in combination with real-time sensor data and AI-based fire spread prediction, creates a new technical solution not found in prior systems. The use of QR codes for location tracking and access to evacuation guidance further enhances the novelty, making the system more accessible and efficient.
[00020] This system advances traditional fire safety systems, which typically rely on static evacuation maps, manual alerts, and non-dynamic guidance. Instead of relying on pre-existing escape plans, this system uses real-time sensor data and AI analysis to generate personalized evacuation routes based on the user’s location and the evolving disaster scenario. The integration of IoT microcontrollers, cloud-based analysis, edge computing, and AI-driven predictions makes this system significantly more responsive, adaptive, and intelligent compared to conventional fire alarm and evacuation systems.
[00021] The system provides a unique approach to rescue management by combining IoT, real-time data analysis, 3D virtual modeling, and AI-driven decision-making in a way that is not straightforward or obvious from prior art. Existing systems might use sensors for detection or QR codes for information access, but their combined application for real-time, adaptive evacuation guidance—where the system continuously updates escape routes based on fire spread and occupant movement—is non-trivial. A skilled person in the field of fire safety or IoT systems would not immediately arrive at this specific combination and real-time dynamic approach without inventive steps.
[00022] The system produces a measurable and concrete technical effect by enhancing disaster response efficiency, reducing evacuation time, improving firefighter access, and minimizing human casualties. The real-time location tracking, dynamically updated guidance, and AI-powered fire spread prediction significantly improve emergency response decision-making. Additionally, the use of IoT and edge computing ensures that the system operates efficiently even in network-constrained environments, making it technically robust and scalable for different types of buildings and infrastructures.
[00023] Thus, the claimed system is novel due to its unique combination of technologies, technically advanced due to its real-time adaptability and AI-driven predictions, and non-obvious because it solves evacuation and disaster response problems in a way that is not straightforward from prior art. The system achieves a significant technical effect by providing faster, safer, and more efficient evacuation and rescue management through a highly intelligent and automated decision-making process.
BRIEF DESCRIPTION OF DRAWINGS
[0024] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0025] The diagrams are for illustration only, which thus is not a limitation of the present disclosure, and wherein:
[0026] FIG. 1 illustrates a block diagram of a system for rescue management, in accordance with a preferred embodiment of the invention.
[0027] FIG. 2 illustrates a schematic diagram of disaster data input to an analysis unit of the system for rescue management, in accordance with a preferred embodiment of the invention.
[0028] FIG. 3 illustrates a schematic diagram of location tracking data input to an analysis unit of the system for rescue management, in accordance with a preferred embodiment of the invention.
[0029] FIG. 4 illustrates a schematic diagram of data output from an analysis unit of the system for rescue management, in accordance with a preferred embodiment of the invention.
[0030] FIG. 5 to FIG. 9 illustrate a three-dimensional representation of data output from an analysis unit of the system for rescue management, in accordance with a preferred embodiment of the invention.
[0031] FIG. 10 illustrates a flow chart depicting a method for rescue management, in accordance with a preferred embodiment of the invention.
[0032] FIG. 11 illustrates an exemplary block diagram of a system for rescue management, in accordance with a preferred embodiment of the invention.
[0033] FIG. 12 illustrates an exemplary architecture of the rescue management system, in accordance with a preferred embodiment of the invention.
[0034] FIG. 13 illustrates an exemplary simplified communication architecture, in accordance with a preferred embodiment of the invention.
[0035] FIG. 14 illustrates an exemplary location tracking mechanism within the system, in accordance with a preferred embodiment of the invention.
[0036] FIG. 15 illustrates how the analysis unit transmits route guidance and disaster alerts to various user interfaces, including smartphones, AR headsets, and LED displays, in accordance with a preferred embodiment of the invention.
[0037] FIG. 16A illustrates an exemplary visual guidance system generated by the analysis unit, in accordance with a preferred embodiment of the invention.
[0038] FIG. 16B illustrates an exemplary extension of the earlier visualizations, overlaying firefighting resources (extinguishers, hose reels) and real-time markers for occupants, rescuers, and hazard zones on a 2D/3D floorplan, in accordance with a preferred embodiment of the invention.
[0039] FIG. 17 illustrates an exemplary stepwise flow of the method, in accordance with a preferred embodiment of the invention.
[0040] FIG. 18 illustrates an exemplary flow diagram that shows how the system responds when a fire-related event is detected, in accordance with a preferred embodiment of the invention.
DETAILED DESCRIPTION OF DRAWINGS
[0041] The present invention is a disaster (fire, smoke, gas leak, etc.) location information system that indicates which floor(s) and/or location(s) are affected. Using data from IoT devices or from an IoT-compatible existing fire management system, it provides the location of fire, smoke, gas leaks, etc. to rescue personnel (for example, firefighters) as additional guidance for planning the rescue operation, and it also provides escape guidance to persons stuck inside the building via QR codes that open a web-based portal for ease of access to the application during an emergency. It is applicable to buildings such as commercial, industrial, airport, hospital, educational, and mixed-use residential buildings, among other types of buildings, including campuses.
[0042] The present invention can be used for all disasters in any building or campus that can be connected by IoT. A disaster is defined as a "sudden or great misfortune" or simply "any unfortunate event." More precisely, a disaster is "an event whose timing is unexpected and whose consequences are seriously destructive." A hazard is anything which can harm or damage someone or something. Both "danger" and "hazard" are nouns that can mean a dangerous or life-threatening situation. Risk is "the possibility that something bad or unpleasant (such as injury or a loss) will happen." Fire disasters have a significant impact on densely populated urban areas as well as on forests. Fire can spread over vertical surfaces, horizontal surfaces, inside porous materials, over flammable liquids, and through inflammable gas-air mixtures, depending upon various favourable scenarios.
[0043] The term 3D represents three-dimensional (x, y, z: length, breadth, height). The term 2D represents two-dimensional (any two of the three dimensions). The term IoT represents the Internet of Things. The term QR code represents Quick Response code. A QR code is a type of two-dimensional matrix barcode. Once a static QR code is created and printed or displayed, it will always lead to the same destination, even if the code is scanned months or years later. Technically, all phone cameras are compatible with QR codes; however, some older phones may require a separate app to scan them.
[0044] MQTT (Message Queuing Telemetry Transport) is a standards-based messaging protocol, or set of rules, used for machine-to-machine communication. Smart sensors, wearables, and other Internet of Things (IoT) devices typically have to transmit and receive data over a resource-constrained network with limited bandwidth. These IoT devices use MQTT for data transmission, as it is easy to implement and can communicate IoT data efficiently. MQTT supports messaging from devices to the cloud and from the cloud to devices.
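By way of a non-limiting illustration, the following Python sketch shows how an MQTT subscriber on the analysis side might receive such sensor messages; it assumes the paho-mqtt client library (1.x call style), and the broker address, topic naming convention, and payload fields are hypothetical choices made for illustration only:

```python
# Minimal subscriber sketch for the MQTT component of the analysis unit.
# Assumes paho-mqtt 1.x (newer releases additionally require a
# callback_api_version argument to mqtt.Client()).
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)          # sensor nodes publish JSON
    if event.get("flame") or event.get("smoke"):
        # Hand off to the processor: update the digital twin and
        # recompute escape/entry routes for the affected zone.
        print(f"ALERT {msg.topic}: hazard reported at ts={event.get('ts')}")

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10", 1883, keepalive=60)  # assumed broker address
client.subscribe("building7/+/+")  # assumed topic scheme: building/floor/zone
client.loop_forever()
```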
[0045] The Hypertext Transfer Protocol (HTTP) is the foundation of the World Wide Web and is used to load webpages using hypertext links. HTTP is an application-layer protocol designed to transfer information between networked devices and runs on top of other layers of the network protocol stack. JavaScript Object Notation (JSON) is a standard text-based format for representing structured data based on JavaScript object syntax. It is commonly used for transmitting data in web applications (e.g., sending some data from the server to the client so it can be displayed on a web page, or vice versa). When sending data to a web server, the data has to be a string. JSON is a way to format that string so that it is easy for the web server to understand. The string is sent over HTTP, which is the standard protocol for communication between web servers and web browsers.
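As a non-limiting illustration of the HTTP/JSON pathway, and of the interoperability with existing fire management systems mentioned above, the following Python sketch forwards a disaster event as JSON over HTTP using the requests library; the endpoint URL and payload fields are assumptions made for illustration:

```python
# Illustrative only: pushing a disaster event to an external fire
# management system as JSON over HTTP (endpoint URL is hypothetical).
import requests

event = {
    "building": "building7",
    "zone": "floor3/zoneA",
    "type": "fire",
    "detected_at": "2024-01-01T10:15:00Z",
}
resp = requests.post("https://fms.example.com/api/events",
                     json=event, timeout=5)  # json= serializes the payload
resp.raise_for_status()                      # and sets the Content-Type header
```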
[0046] FIG. 1 illustrates a block diagram of a system 100 for rescue management, in accordance with a preferred embodiment of the invention. The system 100 includes at least one input device 102, at least one analysis unit 104, and at least one output device 106. The at least one input device 102 is configured to receive disaster data input and location tracking data input from at least one user.
[0047] The at least one user includes but is not limited to at least one administrator of the system 100, at least one rescue personnel, and at least one occupant. The at least one administrator of the system 100 manages the security, authentication, policies, rules, conditions, etc. upon which the rescue operation and management are performed. The at least one input device 102 is associated with the at least one user. The at least one input device 102 includes at least one sensor 108 installed in a rescue location, at least one occupant device 112 associated with at least one occupant in the rescue location, at least one rescue personnel device 114 associated with at least one rescue personnel, and at least one administrator device 110 associated with at least one administrator of the system 100.
[0048] The at least one analysis unit 104 includes an MQTT component 120.
[0049] The MQTT component 120 implements a lightweight, publish-subscribe, machine-to-machine network protocol for message queuing. It is designed for connections with remote locations (e.g., disaster locations) that have devices with resource constraints or limited network bandwidth, such as in the Internet of Things. The at least one administrator device 110 can provide the three-dimensional details (x, y, z, i.e., length, breadth, height) of the rescue location, including the floor plan, number of floors, details of fire extinguishers, details of the at least one sensor 108 (e.g., smoke sensor, water sensor, fire sensor, etc.), exit paths, stairs, lifts, etc. The rescue location information can either be stored in a database associated with the system 100 or be received from at least one interested person (e.g., the at least one administrator, rescue personnel, occupant, etc.) as soon as the disaster is reported. The at least one sensor 108 (e.g., smoke sensor, water sensor, fire sensor, etc.) is configured to provide the disaster data input of the rescue location upon detecting a disaster in the rescue location. The at least one occupant device 112, the at least one rescue personnel device 114, and the at least one administrator device 110 are configured to provide respective location tracking data input. As shown in FIG. 1, the occupant device 112 includes an occupant location tracker 116 configured to track the location of the occupant. Similarly, the rescue personnel device 114 includes a rescue personnel location tracker 118 configured to track the location of the rescue personnel. The tracking data from the occupant location tracker 116 and the rescue personnel location tracker 118 are together referred to as location tracking data input 124. The location tracking data input 124 includes a location map and plan of the rescue location, location information of the rescue personnel, and location information of the occupant. The analysis unit 104 includes at least one processor 126 configured to analyze the disaster data input, the location tracking data input, and the data input 122 (from the at least one administrator, the at least one rescue personnel, and the at least one occupant) received from the at least one input device 102, to generate data output comprising visual data and route guidance data.
[0050] The at least one output device 106 is configured to indicate the visual data 128 and the route guidance data 130 to the at least one user. The at least one output device 106 is associated with the at least one input device 102. The visual data can be displayed on the at least one input device 102 in the form of visual indicators 132, including but not limited to a three-dimensional map, a two-dimensional map, a floorplan, and details of objects, rescue devices, staircases, exit points, lifts, and other features of the rescue location. The route guidance data 130 can be sent to the at least one occupant in the form of an occupant indicator 134 (e.g., arrows) and also to the at least one rescue personnel in the form of a rescue personnel indicator 136 (e.g., arrows) in the rescue location.
[0051] FIG. 2 illustrates a schematic diagram of data input 122 of the disaster to the analysis unit 104 of the system 100 for rescue management, in accordance with a preferred embodiment of the invention. The at least one sensor 108 can send disaster-related information, that is, the detection of smoke, fire, etc. in the rescue location, to the IoT device 202. The MQTT server 204 receives the disaster-related information from the IoT device 202 and sends it to the MQTT component 120 in the analysis unit 104.
[0052] The at least one occupant inside the disaster premises can use the occupant device 112 to scan the QR code 208. Scanning the QR code 208 may link (see, 210) the occupant device 112 to at least one application portal that connects to the analysis unit (via a web application), and the occupant can capture photos and videos and add comments to the photos or videos (see, 212). The at least one application portal also enquires about the current situation of the occupant inside the disaster location. In one embodiment, pop-up questions such as "Are you stuck? / Need help?" (see, 214) are asked of the occupants in YES/NO form via the occupant device 112. After receiving the disaster occurrence information, the at least one administrator can approve (see, 216) such a request and initiate the integration, analysis, planning, and route suggestions for the at least one user.
[0053] In one embodiment, the at least one occupant device 112 is configured to scan a code (e.g., QR code 208) associated with the rescue location to connect with the analysis unit 104. Then, the at least one occupant can capture one or more pictures and videos of the rescue location and transmit/upload the same to the analysis unit 104 upon approval by the at least one administrator of the system 100. In one embodiment, the one or more pictures and videos of the rescue location correspond to the disaster data input.
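By way of a non-limiting illustration, the following Flask sketch (Flask being one of the tools listed above) shows the kind of web endpoint a scan of the QR code 208 could open; the route name, zone identifiers, and response fields are hypothetical, and in the full system the guidance would be produced by the analysis unit 104 rather than a static table:

```python
# Minimal sketch of the web portal opened by a QR scan: the QR encodes the
# zone it is mounted in, and the portal returns current escape guidance.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for state that the analysis unit (104) would maintain live.
GUIDANCE = {
    "floor3/zoneA": {"route": ["zoneA", "zoneB", "fire_stair_east"],
                     "equipment": ["extinguisher_3A", "hose_reel_3"]},
}

@app.route("/escape/<path:zone>")
def escape(zone):
    guidance = GUIDANCE.get(zone)
    if guidance is None:
        return jsonify({"error": "unknown zone"}), 404
    return jsonify(guidance)  # rendered as arrows on the 3D model client-side

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

In the full portal, further routes would accept the photo/video uploads and the YES/NO status responses described above; those are omitted here for brevity.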
[0054] FIG. 3 illustrates a schematic diagram of location tracking data input to the analysis unit 104 of the system 100 for rescue management, in accordance with a preferred embodiment of the invention. The occupant device 112 can be used by the at least one occupant to scan the QR code 304 using a code scanning application installed in the occupant device 112. In one embodiment, the QR code 304 used for sharing the location (see, 302) of the disaster and the QR code 208 used for reporting the disaster are one and the same. The rescue personnel device 114 can be used by the at least one rescue personnel to scan the QR code using a code scanning application installed in the rescue personnel device 114. A QR code 310 can be positioned at the entrance of the rescue building, which can be used by the at least one rescue personnel for performing at least one rescue operation based on the type of the disaster. The location information (see, 304) from the at least one rescue personnel and the location information (see, 302) from the at least one occupant are collectively referred to as location tracking data input 124.
[0055] FIG. 4 illustrates a schematic diagram of data output (i.e., visual data 128 and route guidance data 130) from the analysis unit 104 of the system 100 for rescue management, in accordance with a preferred embodiment of the invention. The at least one output device 106 includes at least one visual indicator in the rescue location to display the visual data 128 and route guidance data 130 to the at least one occupant in the rescue location. The visual information includes two-dimensional and three-dimensional data of the rescue location. In one embodiment, the visual data 128 is shown in the at least one occupant device 112 as the three-dimensional information. The three-dimensional information includes disaster-based animation in 3D model indicating location of the disaster (see, 408).
[0056] In another embodiment, the visual data 128 is shown as flashing model elements and visual callouts displaying location of firefighting equipment (see, 410).
[0057] In another embodiment, the visual data 128 is shown as two-dimensional floorplans and sections (see, 412).
[0058] In another embodiment, the visual data 128 is shown as a text notification (see, 414).
[0059] In the case of occupants (see, 416), the system 100 checks (see, 418) whether a disaster (e.g., fire) exists in the path to the closest fire escape. If a disaster exists in that path, arrow-based guidance (see, 420) to the next best fire escape is shown in the 3D model displayed on the occupant device 112. If no disaster exists in that path, arrow-based guidance (see, 422) to the closest fire escape is shown in the 3D model displayed on the occupant device 112.
[0060] In the case of fire rescue personnel (see, 424), arrow-based route guidance to the disaster location is displayed on the rescue personnel device 114 (see, 426). Further, the location of the at least one occupant that needs help is displayed to the fire rescue personnel in the 3D model (see, 428).
[0061] Thus, the visual data 128 and the route guidance data 130 include two-dimensional data, three-dimensional data (in the form of a 3D model), floorplans, text information, and audio information of the rescue location.
[0062] FIG. 5 illustrates a three-dimensional representation 500 of the data output (i.e., the visual data 128 and the route guidance data 130). Using the at least one rescue personnel device 114, the three-dimensional representation 500 showing the external structure of the multi-storied rescue or disaster location is displayed, as shown in FIG. 5. The exact floor where the disaster (see, fire 502) has occurred can be shown in this three-dimensional representation 500.
[0063] FIG. 6 illustrates a three-dimensional representation 600 of the data output (i.e., the visual data 128 and the route guidance data 130). Using the at least one rescue personnel device 114, the three-dimensional representation 600 showing the internal structure of the multi-storied rescue or disaster location is displayed, as shown in FIG. 6. The exact floor where the disaster has occurred can be shown in this three-dimensional representation 600.
[0064] FIG. 7 illustrates a three-dimensional representation 700 of the data output (i.e., the visual data 128 and the route guidance data 130). Using the at least one rescue personnel device 114, the three-dimensional representation 700 shows arrow-based route guidance to the disaster location for occupants in the rescue location.
[0065] FIG. 8 illustrates a three-dimensional representation 800 of the data output (i.e., the visual data 128 and the route guidance data 130). Using the at least one rescue personnel device 114, the three-dimensional representation 800 shows the floorplan of the disaster location for occupants in the rescue location.
[0066] FIG. 9 illustrates a three-dimensional representation 900 of the data output (i.e., the visual data 128 and the route guidance data 130). Using the at least one rescue personnel device 114, the three-dimensional representation 900 shows the floorplan of the disaster location with the locations of various rescue equipment 902 in the rescue location.
[0067] To summarize the embodiments, a system (100) for rescue management is provided. The system (100) includes at least one input device (102) configured to receive disaster data input and location tracking data input from at least one user, wherein the at least one input device (102) is associated with the at least one user; at least one analysis unit (104) configured to analyze the disaster data input and the location tracking data input from the at least one input device (102) to generate data output comprising visual data and route guidance data; and at least one output device (106) configured to indicate the visual information and guidance information to the at least one user, wherein the at least one output device (106) is associated with the at least one input device (102).
[0068] The at least one input device (102) comprises at least one sensor (108) installed in a rescue location, at least one occupant device (112) associated with at least one occupant in the rescue location, at least one rescue personnel device (114) associated with at least one rescue personnel, and at least one administrator device (110) associated with at least one administrator of the system (100).
[0069] The at least one sensor (108) is configured to provide the disaster data input of the rescue location upon detecting a disaster in the rescue location.
[0070] The at least one occupant device (112), the at least one rescue personnel device (114), and the at least one administrator device (110) are configured to provide respective location tracking data input. The location tracking data input comprises a location map and plan of the rescue location, location information of the rescue personnel, and location information of the occupant.
[0071] The at least one occupant device (112) is configured to scan a code associated with the rescue location to connect with the analysis unit (104); capture one or more pictures and videos of the rescue location; transmit one or more pictures and videos to the analysis unit (104) upon approval by the at least one administrator of the system (100), wherein the one or more pictures and videos of the rescue location correspond to the disaster data input.
[0072] The at least one output device (106) comprises at least one visual indicator in the rescue location to display the visual information and guidance information to the at least one occupant in the rescue location.
[0073] The visual information and guidance information comprises two-dimensional and three-dimensional data of the rescue location.
[0074] The visual information comprises two-dimensional data, three-dimensional data, floorplan, text information, and audio information of the rescue location.
[0075] Further, a method (1000) for rescue management is disclosed. The method includes the steps of:
[0076] receiving, by at least one input device (102), disaster data input and location tracking data input from at least one user, wherein the at least one input device (102) is associated with the at least one user;
[0077] analyzing, by at least one analysis unit (104), the disaster data input and the location tracking data input from the at least one input device (102) to generate data output comprising visual information and guidance information; and
[0078] indicating, by at least one output device (106), the visual information and guidance information to the at least one user, wherein the at least one output device (106) is associated with the at least one input device (102), wherein the at least one input device (102) comprises at least one sensor (108) installed in a rescue location, at least one occupant device (112) associated with at least one occupant in the rescue location, at least one rescue personnel device (114) associated with at least one rescue personnel, and at least one administrator device (110) associated with at least one administrator of the system (100).
[0079] The at least one sensor (108) is configured to provide the disaster data input of the rescue location upon detecting a disaster in the rescue location.
[0080] FIG. 10 illustrates a flow chart depicting a method 1000 for rescue management, in accordance with a preferred embodiment of the invention.
[0081] The step 1002 of the method 1000 includes receiving, by at least one input device 102, disaster data input and location tracking data input from at least one user. The at least one input device 102 is associated with the at least one user.
[0082] The step 1004 of the method 1000 includes analyzing, by at least one analysis unit 104, the disaster data input and the location tracking data input from the at least one input device 102 to generate data output comprising visual information and guidance information.
[0083] The step 1006 of the method 1000 includes indicating, by at least one output device 106, the visual information and guidance information to the at least one user, wherein the at least one output device 106 is associated with the at least one input device 102.
[0084] The at least one input device 102 includes at least one sensor 108 installed in a rescue location, at least one occupant device 112 associated with at least one occupant in the rescue location, at least one rescue personnel device 114 associated with at least one rescue personnel, and at least one administrator device 110 associated with at least one administrator of the system 100. The at least one sensor 108 is configured to provide the disaster data input of the rescue location upon detecting a disaster in the rescue location.
[0085] The present invention serves both occupants and firefighting/rescue personnel. In the case of occupants, the present invention provides the following: 1) displays the location of fire/smoke/gas in a 3D model as well as in 3D/2D section and floor plan views; 2) shows the nearest escape routes, such as fire staircases, fire lifts, and refuge areas, based on the location of the user and the location of the fire/gas/smoke; and 3) displays the location of the nearest firefighting equipment, such as extinguishers, fire hose reels, etc.
[0086] In the case of firefighting and rescue personnel, the present invention provides the following: 1) displays the location of fire/smoke/gas in a 3D model as well as in 3D/2D section and floor plan views; 2) displays guidance to reach the affected areas from the entrance; 3) displays the location of the nearest firefighting equipment, such as extinguishers, fire hose reels, etc., near the affected areas; and 4) provides guidance to the location of trapped occupants, if the occupants have scanned the QR code.
[0087] The information provided by the system 100 that helps disaster management, using fire as an example, includes:
[0088] Where in the building is the fire? Fire's point of origin: where the fire has occurred in the building, with visualisation in the building's 3D model. Near the fire's point of origin: proximate to the fire's point of origin, generally within the same smoke compartment in which the fire originated. Away from the fire's point of origin: the parts of the building that are remote from the fire, separated by firewalls, smoke doors, or smoke compartments.
[0089] What time the fire occurred. "The building is your enemy; know your enemy." Building Information Model (BIM): the model can be populated with information regarding materials of construction and the designed fire ratings, as follows:
[0090] Arrival points on site and hardstand areas for attending fire brigade vehicles/appliances.
[0091] Clear internal access routes and a prominent location for fire indicator panels in order to determine the seat of the fire and suitable location for staging operations.
[0092] Clear and simple information about the building occupants, different tenancies, types of fire protection equipment/systems installed, with site specific information, etc.
[0093] Location of and access to external hydrants and boosters and reliable water supplies.
[0094] Stair construction and protected routes for evacuation of occupants and fire fighter access to each floor and travel distances and fire ratings.
[0095] Location of internal hydrants and sprinkler systems controls.
[0096] Adequate fire resistance levels that are suitable for the occupancy type and number of occupants to ensure no structural collapse.
[0097] Clear instructions for operating smoke control systems, particularly complex ones.
[0098] Information on any special hazards, such as combustible materials, electric vehicles, battery banks, electric recharging facilities and others.
[0099] Sufficient ventilation in larger buildings (e.g., warehouses, manufacturing, process facilities, etc.); and
[00100] Emergency lighting and exit signs to assist in way finding.
[00101] Interpretation of occupant locations, when occupants scan the QR code.
[00102] In an embodiment, FIG. 10 shows the method (1000) for rescue management begins with the receiving step (1002), where an input device (102) collects disaster data input and location tracking data input from a user device (such as a smartphone, tablet, wearable device, or fixed control panel). The disaster data is obtained from an IoT microcontroller (such as ESP8266, ESP32, Raspberry Pi, or Arduino), which detects fire, smoke, or gas leaks inside a building. This detection is based on real-time sensor readings from a network of sensors (1106-1, 1106-2, …., 1106-N), including flame sensors, smoke detectors, gas (ppm) sensors, temperature sensors, humidity sensors, and air quality sensors. These sensors are strategically placed throughout the building to monitor environmental conditions continuously and detect anomalies that indicate the presence of fire, smoke, or gas leaks.
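By way of a non-limiting illustration, the following MicroPython sketch shows how such an IoT microcontroller (here an ESP8266) might publish its readings over MQTT; the pin wiring, broker address, topic name, and payload fields are assumptions made for illustration only:

```python
# Minimal MicroPython sketch for an ESP8266 sensor node publishing flame
# and gas readings over MQTT (assumes Wi-Fi is already configured).
import time
import ujson
from machine import ADC, Pin
from umqtt.simple import MQTTClient

BROKER = "192.168.1.10"            # assumed broker / analysis-unit address
TOPIC = b"building7/floor3/zoneA"  # assumed building/floor/zone topic scheme

flame = Pin(4, Pin.IN)  # IR flame sensor, digital output (assumed wiring)
gas = ADC(0)            # MQ-series gas sensor on the ESP8266 analog pin

client = MQTTClient("node-3A", BROKER)
client.connect()

while True:
    event = {
        "flame": flame.value() == 0,  # many IR modules pull low on detection
        "gas_raw": gas.read(),        # raw 0-1023 reading; needs calibration
        "ts": time.time(),            # seconds since the device epoch
    }
    client.publish(TOPIC, ujson.dumps(event).encode())
    time.sleep(5)  # report interval; shorten once an anomaly is detected
```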
[00103] Once the input data is received, the analyzing step (1004) is performed by an analysis unit (104), which could be a server, cloud-based processing unit, or an edge computing device. The analysis unit processes the disaster data input and the location tracking data input to determine the disaster-affected area and the location of at least one user inside the building. Based on this analysis, the system generates data output comprising visual data and route guidance data. This data includes a 3D digital twin of the building, displaying 3D models and 2D schematics of floors, escape routes, and firefighting equipment locations. The 3D digital twin is dynamically updated in real-time to reflect any changes in the fire, smoke, or gas leak conditions, ensuring that both occupants and rescue personnel receive the most up-to-date evacuation guidance.
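By way of a non-limiting illustration, one plausible realization of the escape-route determination is a hazard-aware shortest-path search over a graph of building zones derived from the digital twin; the graph layout and zone names below are hypothetical:

```python
# Sketch: breadth-first search over a building-zone graph, skipping zones
# currently flagged as hazardous by the sensor network.
from collections import deque

def safest_route(graph, start, exits, hazard_zones):
    """Return the shortest hazard-free path from `start` to any exit.

    graph: dict mapping zone -> adjacent zones (doors, corridors, stairs).
    Returns None if the occupant is cut off (a case for rescue personnel).
    """
    if start in hazard_zones:
        return None
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] in exits:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited and nxt not in hazard_zones:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical one-floor corridor ring with two fire staircases:
floor = {"A": ["B", "D"], "B": ["A", "C"], "C": ["B", "exit2"],
         "D": ["A", "exit1"], "exit1": [], "exit2": []}
print(safest_route(floor, "B", {"exit1", "exit2"}, hazard_zones={"A"}))
# -> ['B', 'C', 'exit2']  (exit1 is unreachable without crossing zone A)
```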
[00104] Following the analysis, the indicating step (1006) ensures that the output device (106) presents the visual data and route guidance data to the user device for assisting with escape route identification. Additionally, the output device provides this information to one or more remote devices associated with firefighters or rescue personnel to guide them in selecting the safest and most efficient entry routes for disaster mitigation. The output device (106) may include smartphones, augmented reality (AR) headsets, tablets, LED screens, wearable smart glasses, or control room monitors, ensuring that users receive information in a format best suited for their situation.
[00105] The method further incorporates QR codes (1108) positioned at strategic locations, including fire exits, staircases, and lift areas, to provide web-based access to real-time escape route guidance. A user inside the building can scan a QR code using their smartphone or tablet to immediately access a web application displaying the nearest escape routes. The analysis unit ensures that the escape routes are continuously updated in real-time based on the evolving disaster situation. Additionally, the system utilizes an artificial intelligence (AI) module to analyze fire spread patterns and recommend optimal evacuation and firefighting strategies, further enhancing the efficiency of the rescue operation.
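As a non-limiting illustration of how the strategically placed QR codes could be produced, the following sketch uses the Python qrcode package to encode, for each mounting location, a portal URL carrying that location's zone identifier; the URL scheme mirrors the hypothetical endpoint sketched earlier with FIG. 2:

```python
# Sketch: generating one static QR image per mounting location, each
# encoding the (hypothetical) web-portal URL for that zone.
import qrcode  # pip install qrcode[pil]

for zone in ["floor3/zoneA", "floor3/zoneB", "entrance/main"]:
    url = f"https://rescue.example.com/escape/{zone}"  # assumed URL scheme
    img = qrcode.make(url)  # static QR: always leads to the same destination
    img.save(f"qr_{zone.replace('/', '_')}.png")
```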
[00106] For seamless communication, the system components utilize MQTT (Message Queuing Telemetry Transport), HTTP (Hypertext Transfer Protocol), or JSON (JavaScript Object Notation) protocols to exchange data efficiently between the input device (102), IoT microcontroller, and the network of sensors. The use of MQTT ensures lightweight, real-time communication, while HTTP and JSON enable structured data transmission to facilitate information sharing between the analysis unit (104), output devices (106), and remote rescue teams.
[00107] Real-World Example of Fire Occurrence in a Building: Consider a fire breaking out in the electrical room of a commercial office building due to an overloaded circuit. The infrared flame sensor and smoke detector (1106-1, 1106-2) detect the presence of fire and smoke, while the gas (ppm) sensor (1106-N) identifies an increase in hazardous gases. These sensors transmit the data to an IoT microcontroller (1104), such as an ESP32, which processes the readings and forwards them to the analysis unit (104), hosted on a cloud-based server.
[00108] Upon receiving this data, the analysis unit (104) identifies the electrical room as the affected area and determines that several employees on the third floor are at risk. It immediately updates the 3D digital twin of the building, highlighting the affected zone in red and dynamically adjusting evacuation routes based on fire progression. The analysis unit also alerts occupants via their smartphones (102), displaying the safest escape route away from the fire. Employees on the third floor receive instructions directing them toward the nearest fire exit via a fire-rated staircase, while employees on lower floors are guided towards other designated safe exits.
[00109] Meanwhile, firefighters approaching the building use a QR code (1108) installed at the main entrance to access a live web-based visualization of the building’s layout. The system provides entry guidance by highlighting accessible pathways free from fire and smoke, allowing firefighters to quickly navigate to the source of the fire. Since the fire is near the building’s main power control panel, the AI module of the analysis unit (104) predicts potential fire spread and issues a warning to rescue teams, suggesting proactive firefighting strategies to prevent escalation. The firefighters, using AR headsets (106), navigate through the smoke-filled hallways with real-time guidance from the 3D digital twin and successfully extinguish the fire before it spreads further.
[00110] The method (1000) for rescue management offers a technologically advanced, real-time approach to fire emergency response, ensuring efficient detection, evacuation guidance, and rescue operations. By integrating IoT sensors, real-time data analysis, AI-powered decision-making, and 3D digital twin visualization, the system significantly enhances fire safety measures in commercial and residential buildings. This method provides clear escape routes for occupants, real-time entry guidance for rescue teams, and dynamic updates based on disaster progression, making it an effective and life-saving solution in emergency situations.
[00111] As will be appreciated by one skilled in the art, the methods and systems may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the methods and systems may take the form of a computer program product on a computer-readable storage medium having computer-readable program instructions (e.g., computer software) embodied in the storage medium. More particularly, the present methods and systems may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized including hard disks, CD-ROMs, optical storage devices, or magnetic storage devices.
[00112] FIG. 11 illustrates an exemplary block diagram of a system for rescue management, in accordance with a preferred embodiment of the invention.
[00113] In an embodiment, the rescue management system (100) provides real-time disaster monitoring, analysis, and guidance to both occupants and rescue personnel during emergencies like fires, smoke, or gas leaks inside a building. Below is a structured technical breakdown of the system's functionality, implementation details, and a real-world example of how it works during a fire emergency in a commercial building.
[00114] Step 1: Data Collection and Input (102, 1102, 1104, 1106-1 to 1106-N): The system begins by collecting real-time disaster data and user location tracking data through at least one input device (102). The input device (1102), such as a smartphone, tablet, wearable device, or a fixed control panel, receives data from an IoT microcontroller (1104), which continuously monitors the network of sensors (1106-1, 1106-2, …., 1106-N) installed throughout the building.
[00115] Implementation Details: The IoT microcontroller (1104) can be selected from ESP8266, ESP32, Raspberry Pi, Arduino, or any Long Range (LoRa)/4G/5G-enabled microcontroller. The network of sensors (1106-1, 1106-2, …., 1106-N) includes flame sensors, smoke detectors, gas (ppm) sensors, temperature sensors, humidity sensors, and air quality sensors to detect fire, smoke, or gas leaks in real time. The input device (102) transmits this data to the analysis unit (104) for further processing.
[00116] Step 2: Analysis and Data Processing (104): The analysis unit (104), which can be implemented as a server, cloud-based processing unit, or an edge computing device, receives the disaster data and the location tracking data of users from the input device (102). It then determines the affected areas and identifies the locations of building occupants to generate real-time rescue strategies.
[00117] Implementation Details: The analysis unit (104) continuously updates the 3D digital twin—a virtual representation of the building with 3D models, 2D schematics, escape routes, and firefighting equipment locations. The data is updated dynamically, meaning if the fire spreads, the evacuation routes adjust accordingly. The system uses AI to analyze fire spread patterns and recommend evacuation routes and firefighting strategies.
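By way of a non-limiting illustration, the dynamically updated digital twin can be reduced, at its simplest, to a zone graph plus a mutable hazard set, with routes recomputed on every sensor event; the structure below is a hypothetical stand-in that reuses the hazard-aware search sketched earlier:

```python
# Minimal stand-in for the dynamically updated digital-twin state: sensor
# events mark zones hazardous, and each query reflects the latest state.
class DigitalTwin:
    def __init__(self, graph, exits):
        self.graph = graph    # zone adjacency taken from the building model
        self.exits = exits
        self.hazards = set()  # zones currently on fire / smoke-filled

    def apply_event(self, zone, event):
        if event.get("flame") or event.get("smoke"):
            self.hazards.add(zone)  # fire spread -> routes change below

    def route_for(self, zone):
        # Reuses safest_route() from the earlier sketch; guidance adjusts
        # as soon as the hazard set changes, i.e. as the fire spreads.
        return safest_route(self.graph, zone, self.exits, self.hazards)
```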
[00118] Step 3: Communication Between Components (Claim 2 - MQTT, HTTP, JSON): To ensure efficient and rapid data transmission, the system uses MQTT (Message Queuing Telemetry Transport), HTTP (Hypertext Transfer Protocol), or JSON (JavaScript Object Notation) protocols.
[00119] Implementation Details: MQTT ensures lightweight and real-time data communication between sensors, IoT microcontrollers, and the analysis unit. HTTP/JSON protocols are used for transmitting visual data and escape routes to user devices and rescue personnel.
[00120] Step 4: Escape Route and Entry Route Guidance (106, 1108, Remote Devices): The output device (106) displays the real-time escape route guidance for building occupants and entry route guidance for rescue personnel.
[00121] Implementation Details: Output devices (106) can be smartphones, AR headsets, tablets, LED screens, wearable smart glasses, or control room monitors. QR codes (1108) installed at multiple strategic locations (fire exits, staircases, lift areas) allow users to scan and access real-time evacuation guidance. Remote devices used by firefighters and rescue teams receive entry route guidance based on the fire location and safe access points.
[00122] Step 5: Dynamic Updates and AI-Based Decision-Making: The system dynamically updates data in real-time as the disaster progresses, using AI to predict fire spread patterns and adjust evacuation plans accordingly.
[00123] Implementation Details: AI analyzes real-time sensor inputs, temperature variations, and smoke movement to determine fire spread predictions. AI adjusts the recommended escape routes and firefighter entry points dynamically.
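As a non-limiting stand-in for the AI module's fire spread prediction, the following heuristic propagates the hazard by one zone of adjacency per time step over the same kind of zone graph used above; a trained model would replace this with predictions driven by temperature variations, smoke movement, and material data:

```python
# Illustrative neighbor-propagation heuristic (not the claimed trained
# model): estimate which zones the fire may reach within `steps` steps.
def predict_spread(graph, burning, steps):
    frontier, reached = set(burning), set(burning)
    for _ in range(steps):
        frontier = {n for z in frontier for n in graph[z]} - reached
        reached |= frontier
    return reached - set(burning)  # zones newly at risk

# With the hypothetical one-floor graph and zone A burning, two steps ahead:
# predict_spread(floor, {"A"}, 2) -> {'B', 'C', 'D', 'exit1'}
```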
[00124] In a real-world working example: A fire breaks out in an office building’s server room due to an electrical short circuit. The flame sensor (1106-1) detects high temperatures, and the smoke sensor (1106-2) identifies smoke accumulation. The IoT microcontroller (1104) processes these signals and transmits the disaster data to the analysis unit (104). The analysis unit (104) identifies the fire in the server room, highlights the affected floor on the 3D digital twin, and marks safe evacuation routes based on the current fire spread pattern. It also identifies that employees on the third floor are in potential danger and need evacuation guidance immediately. Once the fire alert is detected, the analysis unit transmits real-time evacuation instructions using MQTT/JSON protocols to all smartphones and tablets (1102) of the building occupants. Simultaneously, firefighters are alerted through their control center. Building occupants receive evacuation instructions on their smartphones, directing them to the nearest fire staircase.
[00125] Firefighters outside the building scan a QR code (1108) placed at the building entrance, allowing them to view a real-time digital twin of the affected floors and select the safest entry route. A firefighter on the first floor uses an AR headset to navigate smoke-filled hallways, guided by the 3D digital twin visualization of safe pathways.
[00126] As the fire spreads from the server room to the hallway, the system reroutes occupants on the third floor to an alternative staircase with clear visibility.
[00127] Firefighters receive AI-generated alerts, predicting that the fire will reach the main power supply in 5 minutes, allowing them to take preventive action.
[00128] The system (100) for rescue management provides a highly efficient, real-time disaster detection and evacuation system by integrating IoT sensors, AI, a 3D digital twin, and real-time communication protocols. It ensures that building occupants evacuate safely, while rescue personnel receive accurate, up-to-date entry route guidance to mitigate the disaster.
[00129] FIG. 12 illustrates an exemplary architecture of the rescue management system, in accordance with a preferred embodiment of the invention. It demonstrates how sensor-based disaster data and location tracking inputs are received, processed, and translated into meaningful outputs for both occupants and rescue personnel. The input section includes a set of sensors such as smoke sensors, flame sensors, gas sensors, and a QR code scanner, which corresponds to the network of sensors and user devices defined in Claim 1. These inputs are routed to an analysis unit (104), where data is processed using AI algorithms and the MQTT communication protocol, as outlined in Claims 2 and 6. The analysis unit determines the disaster zone and user location, subsequently generating visual and route guidance data. This output is distributed to devices such as smartphones, AR headsets, and LED monitors, satisfying Claims 1, 5, and 7, which define the scope of output devices and the nature of the data they display for evacuation and rescue.
[00130] More specifically, FIG. 12 illustrates a block diagram of the overall architecture of the rescue management system (100). The system comprises a plurality of input devices (102), including a smoke sensor, flame sensor, gas sensor, and a QR code scanner, configured to detect fire, smoke, gas leaks, and location-based user inputs within a building. These input signals are communicated to an analysis unit (104) via protocols such as MQTT, HTTP, or JSON. The analysis unit (104), which may comprise an artificial intelligence (AI) module, processes the received sensor and location data to determine hazard zones and user locations. The processed information is converted into visual and route guidance data, which is then transmitted to one or more output devices (106), such as smartphones, AR headsets, or LED monitors. These output devices display guidance to occupants for evacuation and to remote rescue personnel for coordinated entry.
[00131] FIG. 13 illustrates an exemplary simplified communication architecture, in accordance with a preferred embodiment of the invention, that supports the real-time transmission of sensor data from the network of sensors to the analysis unit. A sensor node is connected to an IoT microcontroller (ESP8266), which communicates with the analysis unit using MQTT, a lightweight messaging protocol ideal for emergency IoT applications. This figure directly supports Claim 2 and Claim 10, which specify the use of MQTT and the IoT hardware stack for transmitting disaster-related data such as fire, smoke, and gas detection metrics. The presence of the ESP8266 confirms compliance with Claim 7, which lists compatible microcontrollers for data acquisition and communication.
[00132] More specifically, FIG. 13 illustrates a communication flow for transmitting disaster data input to the analysis unit (104). Sensor nodes, including fire, smoke, or gas sensors, interface with an IoT microcontroller, such as an ESP8266, which is programmed to forward the sensor data using the MQTT protocol. The data is then received by the analysis unit (104), enabling real-time detection and response. This figure demonstrates the use of lightweight and power-efficient IoT communication pathways, in line with claims that specify the use of ESP8266 and MQTT for inter-device communication within the disaster management framework.
[00133] FIG. 14 illustrates an exemplary location tracking mechanism within the system, in accordance with a preferred embodiment of the invention, in which both occupant devices and rescue personnel devices interact with QR codes placed throughout the building. These QR scans feed into the location tracking module, which is crucial for determining the precise real-time positions of users inside the building. This module integrates directly with the analysis unit to facilitate the generation of location-sensitive escape or entry guidance. The configuration aligns with Claim 3, which highlights the strategic deployment of QR codes at locations such as staircases and fire exits, and Claim 4, which includes rescue personnel as target users. Furthermore, it supports the overall objective of Claim 1 by enabling accurate user localization during emergencies.
[00134] More specifically, FIG. 14 presents a system for location tracking utilizing QR code-enabled user devices. Occupant devices and rescue personnel devices are configured to scan QR codes positioned throughout the building at strategic locations such as exits, staircases, or lifts. The location information obtained via QR scans is transmitted to a central location tracking unit, which integrates this data with sensor inputs. The resultant tracking data is analyzed by the analysis unit (104) to support real-time decision-making. This figure aligns with the claims directed to user-specific guidance based on QR code scans and dynamic location tracking.
[00135] FIG. 15 illustrates how the analysis unit (104) transmits route guidance and disaster alerts to various user interfaces, including smartphones, AR headsets, and LED displays, in accordance with a preferred embodiment of the invention. These outputs are context-aware and tailored to the location of the user, fulfilling the system’s role in real-time evacuation support and firefighting strategy dissemination. This figure reinforces Claim 1 in terms of visual output and supports Claim 5 through its demonstration of dynamic and continuous updates. It also echoes Claim 7, which defines the categories of devices that can serve as output interfaces for the system.
[00136] More specifically, FIG. 15 illustrates the output generation from the analysis unit (104), wherein route guidance and alert data are distributed to multiple output devices. The analysis unit determines optimal escape paths or rescue entry points and renders them on smartphones, AR headsets, and LED screens. These visual outputs assist both occupants and rescuers in navigating the environment during a disaster event. The guidance provided is dynamic and updated in real time in response to changing environmental conditions detected by the sensor network.
[00137] FIG. 16A illustrates an exemplary visual guidance system generated by the analysis unit, in accordance with a preferred embodiment of the invention, focusing on the digital representation of the affected building. It shows fire zone highlights, interior floor navigation, fire locations, and escape route arrows, collectively composing the virtual 2D/3D model of the structure. This matches the digital twin architecture referenced in Claim 1, where the system includes schematics and visual cues for safe evacuation. Additionally, this figure satisfies Claim 8, which requires generation of visual data and route guidance and its presentation to users during emergencies.
[00138] More specifically, FIG. 16A represents components of the visual data output generated by the analysis unit (104). This includes highlighting of fire zones within the building, fire location data, and escape route arrows rendered on 2D or 3D floor schematics. The visualization assists users in recognizing hazardous areas and finding viable evacuation paths. The figure exemplifies the system's ability to generate a virtual representation of the building (i.e., digital twin), fulfilling claims concerning 3D/2D mapping and route guidance visualization.
[00139] FIG. 16B complements the earlier visualizations by overlaying firefighting resources (extinguishers, hose reels) and real-time markers for occupants, rescuers, and hazard zones on a 2D/3D floorplan, in accordance with a preferred embodiment of the invention. This granular detail assists both evacuees and firefighters by indicating the nearest rescue resources and possible hazards. The depiction directly fulfills Claim 1 in terms of route and firefighting equipment visualization. It further supports Claim 4, highlighting tools available to remote rescue personnel, and Claim 6, through its reliance on AI for hazard zone prediction and route refinement based on evolving sensor data.
[00140] More specifically, FIG. 16B illustrates further elements of the virtual environment and floorplan overlays. A 2D floorplan is enhanced with firefighting equipment icons such as fire extinguishers and hose reels. Another floorplan visualization displays occupant markers, rescuer paths, and hazard zones, allowing for a clear situational overview. This supports the claims that require visual augmentation for both evacuees and responders, using dynamic icons and positional tracking overlays.
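[00140a] The overlay concept may be sketched, again purely for illustration, as markers placed onto a 2D floorplan grid; the grid, coordinates, and icon letters below are hypothetical stand-ins for the rendered icons of FIG. 16B.

```python
# Hypothetical 2D floorplan grid: '.' walkable, '#' wall.
floorplan = [list(row) for row in [
    "##########",
    "#........#",
    "#..#..#..#",
    "#........#",
    "##########",
]]

# Overlay markers: E = extinguisher, H = hose reel, O = occupant,
# R = rescuer, X = hazard zone (coordinates are illustrative).
overlays = {(1, 2): "E", (3, 7): "H", (1, 6): "O", (3, 1): "R", (2, 4): "X"}

for (row, col), icon in overlays.items():
    floorplan[row][col] = icon

print("\n".join("".join(row) for row in floorplan))
```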
[00141] FIG. 17 illustrates an exemplary stepwise flow of the method, in accordance with a preferred embodiment of the invention, outlining the method described in Claims 8, 9, and 10. It begins with sensor and location data collection, proceeds to data transmission to the analysis unit, then to hazard and occupant identification, followed by route generation. It shows distribution to output devices, and finally, AI-based prediction and dynamic updating. This structured approach encapsulates the method’s compliance with the patent’s functional requirements, including real-time updates, visual data generation, and adaptive escape routing, thereby fulfilling all elements of the method claims and establishing interoperability between hardware and software components.
[00142] More specifically, FIG. 17 presents a method flowchart detailing the steps of rescue management (1000). The method initiates with sensor and location data collection, followed by transmission to the analysis unit. The analysis unit processes the data to identify hazards and user locations, and generates visual and route guidance information. The output data is distributed to various display devices, and an AI prediction module enables dynamic updates. This process culminates in updated and context-sensitive guidance being rendered to users in real time. This figure directly supports method claims outlining end-to-end functionality.
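[00142a] The end-to-end flow of steps 1002, 1004, and 1006 may be sketched as a single pass of a collect-analyze-indicate loop. The function bodies below are placeholder assumptions standing in for the sensor interfaces, the analysis logic, and the output devices.

```python
def collect_inputs():
    """Step 1002: gather sensor readings and the latest QR location fixes."""
    return {"sensors": {"F2-03": {"flame": 1}},
            "locations": {"occupant-17": "corridor-2"}}

def analyze(inputs):
    """Step 1004: locate hazards and users, then derive route guidance."""
    hazards = {node for node, r in inputs["sensors"].items() if r.get("flame")}
    return {user: {"avoid": sorted(hazards), "head_for": "exit-G1"}
            for user in inputs["locations"]}

def indicate(guidance):
    """Step 1006: render the guidance on the registered output devices."""
    for user, g in guidance.items():
        print(f"{user}: avoid {g['avoid']}, head for {g['head_for']}")

# One pass of the loop; in practice this repeats as sensor data changes.
indicate(analyze(collect_inputs()))
```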
[00143] FIG. 18 illustrates an exemplary flow diagram showing how the system responds when a fire-related event is detected, in accordance with a preferred embodiment of the invention. It begins with sensor activation, MQTT transmission, and data parsing at the analysis unit. Depending on the user type (occupant or rescuer), the system loads the corresponding view: either a 3D escape route with arrows and lighting or a top-down building layout with hazard overlays. This fulfills the multi-role support of Claim 4 and the AI-driven customization of Claim 6. The diagram also reinforces the role of QR-based access (Claim 3) and real-time updates (Claim 5). Additional overlays such as directional labels (North, South, etc.) and zoom zones support visual clarity in high-stress scenarios.
[00144] More specifically, FIG. 18 provides a comprehensive system flow, beginning with data acquisition from fire sensors transmitted via MQTT to a broker. Upon scanning of QR codes, users are identified as either occupants or rescuers. The system subsequently loads customized views based on the user type. For occupants, top-down or 3D floor views are generated with highlighted escape routes, arrows, and animated lighting. For rescuers, full-building overlays with fire locations, directional labels, and access paths are rendered. The figure incorporates real-time sensor updates, fire zone mapping, and user-specific guidance, thereby implementing several claims regarding AI-driven, role-specific, and dynamically updated rescue strategies.
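[00144a] The role-specific view selection may be sketched, as a non-limiting illustration, as a dispatch on the user type resolved at QR-scan time; the view functions below are hypothetical placeholders for the occupant and rescuer renderings of FIG. 18.

```python
def occupant_view(user_id, fire_nodes):
    # Stand-in for the 3D escape-route view with arrows and animated lighting.
    return f"{user_id}: 3D escape view, routing away from {sorted(fire_nodes)}"

def rescuer_view(user_id, fire_nodes):
    # Stand-in for the full-building overlay with fire locations and access paths.
    return f"{user_id}: building overlay, fire at {sorted(fire_nodes)}"

VIEWS = {"occupant": occupant_view, "rescuer": rescuer_view}

def load_view(user_id, role, fire_nodes):
    """Select the view matching the role resolved at QR-scan time."""
    return VIEWS[role](user_id, fire_nodes)

print(load_view("occupant-17", "occupant", {"F2-03"}))
print(load_view("rescuer-02", "rescuer", {"F2-03"}))
```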
CLAIMS:
WE CLAIM:
1. A system (100) for rescue management comprising:
at least one input device (102) configured to receive disaster data input and location tracking data input from a user device (1102) associated with a user, wherein the disaster data input is received from an Internet of Things (IoT) microcontroller (1104) for detecting fire, smoke, or gas leaks at a location inside a building in real time based on the data received from a network of sensors (1106-1, 1106-2, …., 1106-N) comprising flame sensors, smoke sensors, and gas (ppm) sensors;
at least one analysis unit (104) communicably coupled to the at least one input device (102), the at least one analysis unit (104) configured to:
analyze the disaster data input and the location tracking data input from the at least one input device (102) to determine a disaster affected area and a location of the at least one user inside the building; and
generate data output comprising visual data and route guidance data, wherein the visual data and route guidance data comprises a virtual representation of the building (3D digital twin), including 3D models and 2D schematics of floors, escape routes, and firefighting equipment locations; and
at least one output device (106) communicably coupled to the at least one analysis unit (104), the at least one output device (106) is configured to indicate the visual data and route guidance data to the user device for identifying an escape route from the building and to one or more remote devices located at a remote location for identifying an entry route to mitigate the disaster.
2. The system (100) as claimed in claim 1, wherein the at least one input device (102), the IoT microcontroller, and the network of sensors communicate with each other utilizing MQTT (Message Queuing Telemetry Transport), HTTP (Hypertext Transfer Protocol), or JSON (JavaScript Object Notation) protocols.
3. The system (100) as claimed in claim 1, wherein the at least one analysis unit (104) is communicably coupled to a web-based application, accessible via Quick Response (QR) codes (1108) installed within the building, providing the user with real-time escape route guidance based on their location and the detected disaster, and wherein the QR codes are positioned at multiple strategic locations, including fire exits, staircases, and lift areas.
4. The system (100) as claimed in claim 1, wherein the one or more remote devices are associated with one or more firefighters or one or more rescue personnel.
5. The system (100) as claimed in claim 1, wherein the data output is updated dynamically in response to the disaster data input in real time.
6. The system (100) as claimed in claim 1, wherein the at least one analysis unit (104) comprises an artificial intelligence (AI) module configured to analyze fire spread patterns and recommend evacuation and firefighting strategies.
7. The system (100) as claimed in claim 1, wherein:
the at least one input device (102) is selected from a group consisting of a smartphone, tablet, wearable device, or a fixed control panel;
the at least one IoT microcontroller is selected from a group consisting of extremely small peripheral (ESP)8266, ESP32, Raspberry Pi, Arduino, or any Long Range (LoRa)/4G/5G-enabled microcontroller;
the network of sensors is selected from a group consisting of infrared flame sensors, smoke detectors, gas (ppm) sensors, temperature sensors, humidity sensors, and air quality sensors;
the at least one analysis unit (104) is implemented as a server, cloud-based processing unit, or an edge computing device; and
the at least one output device (106) is selected from a group consisting of smartphones, augmented reality (AR) headsets, tablets, LED screens, wearable smart glasses, or control room monitors.
8. A method (1000) for rescue management, the method comprising:
receiving (1002), by at least one input device (102), disaster data input and location tracking data input from a user device associated with a user, wherein the disaster data input is received from an Internet of Things (IoT) microcontroller for detecting fire, smoke, or gas leaks at a location inside a building in real time based on the data received from a network of sensors comprising flame sensors, smoke sensors, and gas (ppm) sensors;
analyzing (1004), by at least one analysis unit (104), the disaster data input and the location tracking data input from the at least one input device (102) to determine a disaster affected area and a location of the at least one user inside the building, and generating data output comprising visual data and route guidance data, wherein the visual data and route guidance data comprises a virtual representation of the building (3D digital twin), including 3D models and 2D schematics of floors, escape routes, and firefighting equipment locations; and
indicating (1006), by at least one output device (106), the visual data and route guidance data to the user device for identifying an escape route from the building and to one or more remote devices located at a remote location for identifying an entry route to mitigate the disaster.
9. The method (1000) as claimed in claim 8, wherein the method further comprises:
providing, through a web-based application, accessible via Quick Response (QR) codes installed within the building, the user with real-time escape route guidance based on their location and the detected disaster, and wherein the QR codes are positioned at multiple strategic locations, including fire exits, staircases, and lift area;
updating the data output dynamically in response to the disaster data input in real time; and
analyzing, by an artificial intelligence (AI) module of the at least one analysis unit (104), fire spread patterns and recommending evacuation and firefighting strategies.
10. The method (1000) as claimed in claim 8, wherein:
the at least one input device (102), the IoT microcontroller, and the network of sensors communicate with each other utilizing MQTT (Message Queuing Telemetry Transport), HTTP (Hypertext Transfer Protocol), or JSON (JavaScript Object Notation) protocols;
the at least one analysis unit (104) is communicably coupled to a web-based application, accessible via Quick Response (QR) codes installed within the building, providing the user with real-time escape route guidance based on their location and the detected disaster, and wherein the QR codes are positioned at multiple strategic locations, including fire exits, staircases, and lift areas;
the at least one input device (102) is selected from a group consisting of a smartphone, tablet, wearable device, or a fixed control panel;
the at least one IoT microcontroller is selected from a group consisting of extremely small peripheral (ESP)8266, ESP32, Raspberry Pi, Arduino, or any Long Range (LoRa)/4G/5G-enabled microcontroller;
the network of sensors is selected from a group consisting of infrared flame sensors, smoke detectors, gas (ppm) sensors, temperature sensors, humidity sensors, and air quality sensors;
the at least one analysis unit (104) is implemented as a server, cloud-based processing unit, or an edge computing device; and
the at least one output device (106) is selected from a group consisting of smartphones, augmented reality (AR) headsets, tablets, LED screens, wearable smart glasses, or control room monitors.
Dated this 01st Day of July 2025
Prasad Prabhakar Karhad
Agent for the Applicant
IN/PA-2352