
Method And System For Monitoring Of Turbid Media Of Interest To Predict Events

Abstract: State-of-the-art systems use active energy sources such as lasers for imaging turbid media. Identification of a turbid media of interest in the presence of other turbid media is not addressed, and prediction of events, such as outbursts, in a Region of Interest (ROI) relies only on current data of a turbid activity. The method and system disclosed for monitoring of turbid media of interest to predict events utilize spatio-temporal features extracted from images captured by visual cameras, in fusion with features extracted from sensors and cameras of different modalities, to correctly identify turbid regions. A set of parameters relevant to the turbid media of interest enables the system to correctly identify the turbid media of interest from a plurality of turbid media present in the ROI. The system accurately predicts potential outburst regions by utilizing historical data (time series data providing variations of the temperature profile and temporal signatures of past events) along with current data. [To be published with FIG. 3]


Patent Information

Application #
Filing Date
30 January 2020
Publication Number
32/2021
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Email
kcopatents@khaitanco.com
Parent Application
Patent Number
Legal Status
Grant Date
2025-09-24
Renewal Date

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai 400021 Maharashtra, India

Inventors

1. GUBBI LAKSHMINARASIMHA, Jayavardhana Rama
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village Bangalore 560066 Karnataka, INDIA
2. SEEMAKURTHY, Karthik
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village Bangalore 560066 Karnataka, INDIA
3. PURUSHOTHAMAN, Balamuralidhar
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village Bangalore 560066 Karnataka, INDIA
4. RAJ, Ashutosh
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village Bangalore 560066 Karnataka, INDIA
5. HARIHARAN ANAND, Vishnu
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village Bangalore 560066 Karnataka, INDIA
6. KOTAMRAJU, Srinivas
Tata Consultancy Services Limited Kohinoor Park Plot No 1, Jubliee Garden Cyberabad 500001 Telangana, INDIA
7. RANGARAJAN, Mahesh
Tata Consultancy Services Limited Gopalan Global Axis, SEZ "H" Block, No. 152 (Sy No. 147,157 & 158), Hoody Village Bangalore 560066 Karnataka, INDIA

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR MONITORING OF TURBID MEDIA OF
INTEREST TO PREDICT EVENTS
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD [001] The embodiments herein generally relate to the field of harsh environment monitoring and alarming systems and, more particularly, to a method and system for monitoring of turbid media of interest in harsh environments to predict events such as possible underground geyser outbursts.
BACKGROUND [002] Safety of workers in harsh environments such as coal mines, oil refineries, underwater sea exploration for energy sources and the like is critical. To ensure safety, minimize damage and predict any undesired activity or outbursts due to scattering media or turbid media, such as steam flow or gas flow, present in the harsh environment, the turbid media needs to be monitored closely and continuously. Manual monitoring is highly risky and hence not practical. Among automatic approaches, state-of-the-art systems use active energy sources like lasers for imaging and mostly operate in controlled environments. Typically, active energy sources like lasers are more suitable for short distances and are difficult to adapt to scenarios where the scattering or turbid media is monitored from a far away distance (in the order of kilometers). Attempts to utilize passive sensors such as cameras, which rely on energy reflected from ambient light, have their own challenges. In scenarios where the primary detectors (cameras) are placed very far away from the area being monitored, the scattered energy from the turbid media is influenced by the ambient environment, inducing noise in the information captured by various detectors. Further, in situations wherein multiple different scattering or turbid media exist together, detecting the turbid media of interest from images captured by the camera sensors is a challenge. For example, while monitoring steam regions in a coal mine, which is an outdoor environment, other turbid media like haze, fog and dust may also exist, which poses a challenge to correctly identify the turbid media of interest. These challenges in uncontrolled environments make it difficult to utilize passive sensors such as cameras to extract accurate relevant information associated with the turbid media of interest to accurately predict undesired activity or outbursts in the harsh environment.

SUMMARY [003] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for monitoring of turbid media of interest to predict outbursts is provided. The method comprises receiving input data comprising camera data and sensor data from a multimodal sensing unit providing long distance surveillance, wherein the input data provides information of one or more turbid media present in a Region of Interest (ROI), and wherein the multimodal sensing unit comprises: a plurality of multimodal cameras, comprising a combination of portable and fixed visual cameras and thermal cameras, covering a plurality of subregions within the ROI, and positioned to capture and stream the camera data comprising visual image frames and thermal image frames, wherein the camera data provides multiple views of the plurality of subregions comprising local views and global views; and a plurality of multimodal sensors positioned to capture the sensor data comprising a plurality of parameters associated with the plurality of subregions. Further, the method comprises preprocessing the camera data and the sensor data to eliminate noise, wherein the camera data is preprocessed using a video stabilization approach to eliminate noise induced in the camera data due to undesired motion of the plurality of multimodal cameras. Further, the method comprises deriving flow vector information from the visual image frames or thermal image frames to extract temporal features, of turbid regions of the one or more turbid media within every image frame, from the flow vector information.
Further, the method comprises processing the sensor data and the temporal features extracted from the flow vector information to identify a plurality of turbid regions corresponding to a turbid media of interest among the one or more turbid media present in the ROI, wherein the plurality of turbid regions corresponding to the turbid media of interest are identified based on a set of features uniquely defining the turbid media of interest. Furthermore, the method comprises clustering the plurality of turbid regions into a plurality of clusters based on the temporal features

and a thermal profile obtained from the thermal image frames, wherein each cluster among the plurality of clusters comprises one or more turbid regions among the plurality of turbid regions. Furthermore, the method comprises fitting a convex hull to each turbid region of each cluster to determine a plurality of minimum points of the convex hull fitted to each turbid region, wherein each minimum point is identified by corresponding location coordinates. Further, the method comprises grouping the plurality of minimum points in accordance with corresponding location coordinates to identify a plurality of source locations of a plurality of turbid activities corresponding to the turbid media of interest, wherein each source location among the plurality of source locations corresponds to one or more minimum points which have mapping location coordinates. Further, the method comprises monitoring temporal variations occurring in the plurality of turbid activities corresponding to the plurality of source locations by performing frame by frame analysis of the temporal features extracted from the flow vector information and the corresponding thermal profile of the ROI obtained from the thermal cameras. Further, the method comprises predicting one or more source locations among the plurality of source locations as target locations for occurrence of events based on a prediction model, wherein the prediction model utilizes clustered historical data of the plurality of source locations, the detected temporal variations and the thermal profile of each frame for prediction.
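The convex-hull and minimum-point step described above can be sketched as follows. This is purely a minimal illustrative sketch (Andrew's monotone chain in plain Python) and not the claimed implementation; it assumes each turbid region is given as 2D pixel coordinates (x, y) where a larger y value lies lower in the image, so the lowest hull vertex approximates a candidate source location.

```python
def hull_minimum_point(points):
    """Fit a convex hull (Andrew's monotone chain) to a turbid region's
    pixel coordinates and return its lowest vertex -- a candidate source
    location of the turbid activity. Assumes image coordinates where a
    larger second coordinate (row) is lower in the scene."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return max(pts, key=lambda p: p[1])

    def cross(o, a, b):
        # Cross product of vectors OA and OB; sign gives turn direction.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    # The "minimum point" of the fitted hull is its vertex nearest the ground.
    return max(hull, key=lambda p: p[1])
```

Minimum points from multiple regions whose coordinates map to one another would then be grouped into a single source location, as the specification describes.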
[004] In another aspect, a system for monitoring of turbid media of interest to predict outbursts is provided. The system comprises a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to receive input data comprising camera data and sensor data from a multimodal sensing unit providing long distance surveillance, wherein the input data provides information of one or more turbid media present in a Region of Interest (ROI), and wherein the multimodal sensing unit comprises: a plurality of multimodal cameras, comprising a combination of portable and fixed visual cameras and thermal cameras with pan-tilt-zoom feature, covering a plurality of subregions within the ROI, and positioned

to capture and stream the camera data comprising visual image frames and thermal image frames, wherein the camera data provides multiple views of the plurality of subregions comprising local views and global views; and a plurality of multimodal sensors positioned to capture the sensor data comprising a plurality of parameters associated with the plurality of subregions. Further, the one or more hardware processors are configured to preprocess the camera data and the sensor data to eliminate noise, wherein the camera data is preprocessed using a video stabilization approach to eliminate noise induced in the camera data due to undesired motion of the plurality of multimodal cameras. Further, the one or more hardware processors are configured to derive flow vector information from the visual image frames to extract temporal features, of turbid regions of the one or more turbid media within every image frame, from the flow vector information. Further, the one or more hardware processors are configured to process the sensor data and the temporal features extracted from the flow vector information to identify a plurality of turbid regions corresponding to a turbid media of interest among the one or more turbid media present in the ROI, wherein the plurality of turbid regions corresponding to the turbid media of interest are identified based on a set of features uniquely defining the turbid media of interest. Furthermore, the one or more hardware processors are configured to cluster the plurality of turbid regions into a plurality of clusters based on the temporal features and a thermal profile obtained from the thermal image frames, wherein each cluster among the plurality of clusters comprises one or more turbid regions among the plurality of turbid regions.
Furthermore, the one or more hardware processors are configured to fit a convex hull to each turbid region of each cluster to determine a plurality of minimum points of the convex hull fitted to each turbid region, wherein each minimum point is identified by corresponding location coordinates. Furthermore, the one or more hardware processors are configured to group the plurality of minimum points in accordance with corresponding location coordinates to identify a plurality of source locations of a plurality of turbid activities corresponding to the turbid media of interest, wherein each source location among the plurality of source locations corresponds to one or more minimum points which have mapping location coordinates. Further, the one or more

hardware processors are configured to monitor temporal variations occurring in the plurality of turbid activities corresponding to the plurality of source locations by performing frame by frame analysis of the temporal features extracted from the flow vector information and the corresponding thermal profile of the ROI obtained from the thermal cameras. Further, the one or more hardware processors are configured to predict one or more source locations among the plurality of source locations as target locations for occurrence of events based on a prediction model, wherein the prediction model utilizes clustered historical data of the plurality of source locations, the detected temporal variations and the thermal profile of each frame for prediction.
[005] In yet another aspect, there are provided one or more non-transitory machine readable information storage mediums comprising one or more instructions, which when executed by one or more hardware processors cause execution of a method for monitoring of turbid media of interest to predict outbursts. The method comprises receiving input data comprising camera data and sensor data from a multimodal sensing unit providing long distance surveillance, wherein the input data provides information of one or more turbid media present in a Region of Interest (ROI), and wherein the multimodal sensing unit comprises: a plurality of multimodal cameras, comprising a combination of portable and fixed visual cameras and thermal cameras, covering a plurality of subregions within the ROI, and positioned to capture and stream the camera data comprising visual image frames and thermal image frames, wherein the camera data provides multiple views of the plurality of subregions comprising local views and global views; and a plurality of multimodal sensors positioned to capture the sensor data comprising a plurality of parameters associated with the plurality of subregions. Further, the method comprises preprocessing the camera data and the sensor data to eliminate noise, wherein the camera data is preprocessed using a video stabilization approach to eliminate noise induced in the camera data due to undesired motion of the plurality of multimodal cameras. Further, the method comprises deriving flow vector information from the visual image frames and the thermal image frames to extract temporal features, of turbid regions of the one or more turbid media within every

image frame, from the flow vector information. Further, the method comprises processing the sensor data and the temporal features extracted from the flow vector information to identify a plurality of turbid regions corresponding to a turbid media of interest among the one or more turbid media present in the ROI, wherein the plurality of turbid regions corresponding to the turbid media of interest are identified based on a set of features uniquely defining the turbid media of interest. Furthermore, the method comprises clustering the plurality of turbid regions into a plurality of clusters based on the temporal features and a thermal profile obtained from the thermal image frames, wherein each cluster among the plurality of clusters comprises one or more turbid regions among the plurality of turbid regions. Furthermore, the method comprises fitting a convex hull to each turbid region of each cluster to determine a plurality of minimum points of the convex hull fitted to each turbid region, wherein each minimum point is identified by corresponding location coordinates. Further, the method comprises grouping the plurality of minimum points in accordance with corresponding location coordinates to identify a plurality of source locations of a plurality of turbid activities corresponding to the turbid media of interest, wherein each source location among the plurality of source locations corresponds to one or more minimum points which have mapping location coordinates. Further, the method comprises monitoring temporal variations occurring in the plurality of turbid activities corresponding to the plurality of source locations by performing frame by frame analysis of the temporal features extracted from the flow vector information and the corresponding thermal profile of the ROI obtained from the thermal cameras.
Further, the method comprises predicting one or more source locations among the plurality of source locations as target locations for occurrence of events based on a prediction model, wherein the prediction model utilizes clustered historical data of the plurality of source locations, the detected temporal variations and the thermal profile of each frame for prediction.
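One simple way the prediction model's fusion of clustered historical data with current observations might be sketched is via signature matching: a source location is flagged as a target location if its current time series resembles a historical pre-event signature. The correlation-based matching, the centroid representation and the threshold below are purely illustrative assumptions, not the disclosed prediction model.

```python
import numpy as np

def predict_targets(current, history, threshold=0.9):
    """Flag source locations as target locations for events.

    current  -- dict mapping a source-location id to its current time series
                (e.g. per-frame temperature or average velocity).
    history  -- list of cluster-centroid time series of past pre-event
                signatures (clustered historical data), same length as above.
    threshold -- hypothetical normalised-correlation tuning value."""
    targets = []
    for loc, series in current.items():
        # Standardise the current series (z-score) for shape comparison.
        s = (series - np.mean(series)) / (np.std(series) + 1e-12)
        for centroid in history:
            c = (centroid - np.mean(centroid)) / (np.std(centroid) + 1e-12)
            # Normalised correlation: 1.0 means identical temporal shape.
            if np.dot(s, c) / len(s) >= threshold:
                targets.append(loc)
                break
    return targets
```

A location with a steadily rising profile that matches a historical pre-outburst centroid would be flagged, while a flat profile would not.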
[006] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
[007] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[008] FIG. 1 is a functional block diagram of a system for monitoring of a turbid media of interest in a Region of Interest (ROI) to predict events, in accordance with some embodiments of the present disclosure.
[009] FIG. 2A and FIG. 2B illustrate a flow diagram of a method for monitoring of the turbid media of interest in the ROI to predict the events, using the system of FIG. 1, in accordance with some embodiments of the present disclosure.
[0010] FIG. 3 illustrates an example ROI of a harsh environment, wherein the system of FIG. 1 is deployed for monitoring of the turbid media of interest to predict the events, in accordance with some embodiments of the present disclosure.
[0011] FIG. 4 is a functional block diagram of the system of FIG. 1 depicting prediction model providing pattern mining for predictive analytics for event prediction in the ROI, in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS [0012] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
[0013] Embodiments herein provide a method and system for monitoring of turbid media of interest to predict events in a Region of Interest (ROI) of a harsh environment. The events refer to sudden or abrupt changes in environment such as

outbursts. The system disclosed herein utilizes a multimodal sensing unit for monitoring the turbid media of interest in the ROI. The multimodal sensing unit is a combination of a plurality of multimodal cameras, such as visual cameras and thermal cameras, along with multimodal sensors such as pressure sensors, doppler radars and the like. The visual cameras, the thermal cameras, the doppler radars and so on are positioned to provide long distance surveillance, while the sensors such as pressure sensors and the like are positioned across the ROI to provide proximity surveillance to capture various parameters. The visual cameras provide passive sensing, utilizing reflected ambient light to capture the dynamic features of one or more turbid media present in the ROI. The thermal cameras provide passive sensing to capture the thermal signature of the ROI. The system provides a mechanism to effectively compensate for noise induced in the camera data and the sensor data capturing information of the turbid media. The method and system disclosed herein utilize the spatio-temporal features extracted from the videos captured by visual cameras in fusion with features extracted from other sensors and cameras of different modalities to correctly identify turbid regions. A set of parameters relevant to the turbid media of interest enables the system to correctly identify the turbid media of interest from the plurality of turbid media captured by the visual cameras from the ROI. This eliminates the possibility of false negatives. Unlike existing systems, which use time series data corresponding to the dynamic properties of the scattering or turbid media captured during a specific time duration alone, the system disclosed utilizes historical data along with current data from multiple sensors, to more accurately predict potential event regions well in advance and raise alerts.
[0014] Referring now to the drawings, and more particularly to FIGS. 1 through 4, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.

[0015] FIG. 1 is a functional block diagram of a system for monitoring of a turbid media of interest to predict events, in accordance with some embodiments of the present disclosure.
[0016] In an embodiment, the system 100 includes a processor(s) 104, communication interface device(s), alternatively referred as input/output (I/O) interface(s) 106, and one or more data storage devices or a memory 102 operatively coupled to the processor(s) 104. The system 100 with one or more hardware processors is configured to execute functions of one or more functional blocks of the system 100. The system further includes a multimodal sensing unit 110 comprising a plurality of multimodal cameras such as visual cameras and thermal cameras for long distance surveillance. The multimodal sensing unit 110 further comprises multimodal sensors such as doppler radars or the like for long distance surveillance and pressure sensors and the like for close surveillance. The system 100 utilizes spatio-temporal features extracted from the images captured by visual cameras of the multimodal cameras in fusion with features extracted from other sensors and cameras of different modalities to rightly identify turbid regions in the ROI.
[0017] Referring to the components of system 100, in an embodiment, the processor(s) 104, can be one or more hardware processors 104. In an embodiment, the one or more hardware processors 104 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In an embodiment, the system 100 can be implemented in a variety of computing systems including laptop computers, notebooks, hand-held devices such as mobile phones, workstations, mainframe computers, servers, a network cloud and the like.
[0018] The I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, a touch user interface (TUI) and the like and can facilitate multiple communications within

a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface(s) 106 can include one or more ports for connecting a number of devices (nodes) of the system 100 to one another or to another server. Further, the I/O interface 106 provides an interface for interfacing the multimodal cameras, doppler radars, and sensors of the multimodal sensing unit 110 to the system 100.
[0019] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. Further, the memory 102 may include a database 108, which may store the current and historical camera data, sensor data, any meta data and the like. In an embodiment, the database 108 may be external to the system 100 (not shown) and coupled to the system via the I/O interface 106. Further, the memory 102 may comprise information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure. Functions of the components of system 100 are explained in conjunction with the flow diagram of FIGS. 2A and 2B, and FIG. 3 depicting an example system 100 deployed in an example harsh environment, such as a coal mining region, to monitor geyser activities or steam flows (turbid media of interest) and predict possible events such as a geyser outburst, with sudden, heavy and high speed steam flows, which are risky events for people working in the ROI.
[0020] FIG. 2A and FIG. 2B illustrate a flow diagram of a method for monitoring of the turbid media of interest in the ROI to predict events, using the system of FIG. 1, in accordance with some embodiments of the present disclosure.
[0021] In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the processor(s) 104 and is configured to store instructions for execution of steps of the method 200 by the processor(s) or one or more hardware processors 104. The steps of the method 200 of the present disclosure will now be explained with reference to the components or

blocks of the system 100 as depicted in FIG. 1, the steps of the flow diagram as depicted in FIG. 2A through FIG. 2B, the example system of FIG. 3 and the functional block of the system 100 for predictive analytics as in FIG. 4. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
[0022] Referring to the steps of the method 200, at step 202, the one or more hardware processors 104 are configured to receive input data comprising camera data and sensor data from the multimodal sensing unit 110. The input data provides information of one or more turbid media present in a Region of Interest (ROI), which represents a harsh environment. The harsh environment may, for example, be a coal mining region, an oil refinery, an area of underwater sea exploration for energy sources and the like.
[0023] An illustrative example ROI in FIG. 3 depicts the coal mining region. The ROI, spread across several kilometers, is shown to have steam flows rising above the ground at multiple locations and presence of workers within the harsh environment. The presence of workers is implicitly depicted by movement of load carrying vehicles. A control unit at the periphery of the ROI includes the system 100 with the associated multimodal sensing unit 110. Unlike existing methods that face a challenge in utilizing passive sensors such as cameras in long distance surveillance of harsh environments, in the system 100 disclosed herein, the entire ROI is monitored by the system 100 via the multimodal sensing unit 110 comprising a plurality of multimodal cameras, which include a combination of portable and fixed visual cameras and thermal cameras. In the example depicted in FIG. 3, the visual cameras include fixed cameras along the periphery of the ROI (such as visual cameras VC1 and VC2, and thermal cameras TC1 and TC2) providing long distance surveillance. Further, depicted are vehicle mounted portable cameras (such as visual camera VC3 and thermal camera TC3). As can be

understood, any appropriate number of multimodal cameras may be used to cover the entire ROI, with the corresponding fields of view of the cameras covering a plurality of subregions within the ROI. Thus, the multimodal cameras are positioned to capture and stream the camera data comprising visual image frames and thermal image frames. The camera data captured by the multimodal cameras provides multiple views of the plurality of subregions comprising local views and global views. Further, the multimodal sensing unit 110 includes a plurality of multimodal sensors positioned to capture the sensor data comprising a plurality of parameters associated with the plurality of subregions. In the example of FIG. 3, the sensors include Doppler radio detection and ranging sensors (radars), such as DR1 and DR2, positioned along the periphery of the ROI to provide long distance surveillance. Doppler radars, which can operate over longer distances, essentially provide the distance information of any location of interest within the ROI, for example, a source location of steam flow identified by the system 100, the location of the moving vehicles mounted with the cameras and the like. Further, additional sensors such as pressure sensors may be placed across the ROI, which enables the system to capture data providing pressure at various locations across the ROI. Further, sensors such as geophones (installed deep inside the earth) to measure the seismic response of the earth and Light Detection and Ranging (LIDAR) sensors for sensing the 3D information of the scene (sub-region of the ROI) under surveillance can be deployed. LIDAR and Radio Detection and Ranging (RADAR) differ in the signals they use for sensing the objects under surveillance. Such combinations ensure gathering the maximum possible data of the ROI from various perspectives to improve accuracy.
Also, the system 100 may use Geographic Information System (GIS) and/or Global Positioning System (GPS) data from satellites for identifying the exact location of the outbursts.
[0024] At step 204, the one or more hardware processors 104 are configured to preprocess the camera data and the sensor data to eliminate noise. The camera data is preprocessed using a video stabilization approach to eliminate noise induced in the camera data due to undesired motion of the plurality of multimodal cameras. Any motion captured in images of the camera data which does not relate to motion

due to turbid region flow, such as steam flow, is termed undesired motion. The undesired motion may be induced due to dynamic camera motion (say, due to strong winds), dynamic object motion, and turbulent motion from the features corresponding to the events, such as outbursts. The motion induced by the cameras is global. Hence, the video stabilization approach used herein estimates the global motion between successive frames; this estimated motion can then be compensated using known image processing techniques. On the other hand, the motion induced due to the turbulent atmosphere is local, and the video stabilization approach utilizes a non-rigid registration approach to compensate the local motion induced by the turbulent atmosphere. Similar to the camera data, there is presence of noise in the sensor data as well. For example, for geophones, which measure seismic vibrations, the sensed signal can be affected by surface noise due to humans or vehicles moving around in the vicinity of the sensors. Hence, the system 100 removes the noise before the sensor data is integrated with the visual camera features (camera data). As understood, noise elimination is critical to extract true or accurate data for processing by the system 100. Thus, any noise induced in the sensor data is eliminated by using techniques specific to the sensors being used.
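The global-motion estimation stage of such a video stabilization approach can be illustrated with a translation-only phase-correlation sketch. This is a simplified assumption for illustration: a full implementation would also handle rotation and zoom, and would add the non-rigid local registration for turbulence-induced motion.

```python
import numpy as np

def global_shift(prev_frame, next_frame):
    """Estimate the global inter-frame translation (e.g. camera shake from
    strong winds) by phase correlation. The estimated shift can then be
    compensated before local, non-rigid registration handles
    turbulence-induced motion. Translation-only model for illustration."""
    cross = np.conj(np.fft.fft2(prev_frame)) * np.fft.fft2(next_frame)
    cross /= np.abs(cross) + 1e-12        # keep phase only (normalise)
    corr = np.fft.ifft2(cross).real       # correlation surface; peak = shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the far half of the surface to negative shifts.
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Once the shift is known, the frame can be re-aligned (e.g. by the inverse translation) so that only the turbid-media motion remains for feature extraction.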
[0025] At step 206, the one or more hardware processors 104 are configured to derive flow vector information from the visual image frames and the thermal image frames to extract temporal features of the turbid regions of the one or more turbid media within every image frame. Deriving the flow vector information comprises the following steps:
1. For every pixel position in the image frames (the visual image frames and the thermal image frames), extract a patch around the pixel position.
2. Search the subsequent image frames (successive frames of the video) streamed by the visual cameras to detect where a similar patch, as identified in the initial frame, exists.
3. Once the location of the patch in the next frame is identified, calculate the difference between the positions of this patch across the two frames; this difference is defined as the flow vector, providing the flow vector information.
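The three steps above can be sketched as a simple sum-of-squared-differences patch search. The function name `patch_flow_vector` and the patch and search-window sizes are illustrative assumptions, not values fixed by the disclosure.

```python
import numpy as np

def patch_flow_vector(frame_t, frame_t1, y, x, patch=8, search=6):
    """Extract a patch around (y, x) in frame_t, search a window in the
    next frame frame_t1 for the best-matching patch (minimum sum of
    squared differences), and return the displacement (dx, dy) as the
    flow vector for that pixel position."""
    ref = frame_t[y:y + patch, x:x + patch]
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0:
                continue
            cand = frame_t1[yy:yy + patch, xx:xx + patch]
            if cand.shape != ref.shape:     # candidate fell off the frame
                continue
            ssd = float(((cand - ref) ** 2).sum())
            if ssd < best:
                best, best_dy, best_dx = ssd, dy, dx
    return best_dx, best_dy
```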

[0026] The temporal features derived from the flow vector information include average velocity, volume, area, frequency moments, opacity and the like, which are time varying features. In an embodiment, the flow vector information may be obtained by utilizing flow cameras, which further reduces computation related to extraction of flow vector information from visual or thermal camera data.
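As an illustration, two of the temporal features named above (average velocity and area) can be computed from a dense flow field and a mask of the turbid region; the function name and input layout are assumptions for this sketch.

```python
import numpy as np

def temporal_features(flow_dx, flow_dy, mask):
    """Compute two example temporal features for one frame:
    - avg_velocity: mean flow magnitude over the turbid-region pixels
    - area: turbid-region size in pixels"""
    mag = np.hypot(flow_dx, flow_dy)        # per-pixel velocity magnitude
    area = int(mask.sum())
    avg_velocity = float(mag[mask].mean()) if area else 0.0
    return {"avg_velocity": avg_velocity, "area": area}
```

Tracking these values frame by frame yields the time-varying signatures used later for clustering and prediction.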
[0027] In practical scenarios, along with the turbid media of interest, such as the steam regions in the example of FIG. 3, other turbid media such as dust rising due to unloading of vehicles (coal unloading), fog, and the like may also be present. Since images of the rising dust or fog captured by the cameras have properties or features somewhat similar to those of rising steam, it is critical to differentiate between the turbid media of interest and the other turbid media before further processing the camera data for event prediction, to eliminate the possibility of false alarms due to inaccurate prediction. Thus, at step 208, the one or more hardware processors 104 are configured to process the sensor data and the temporal features extracted from the flow vector information to identify a plurality of turbid regions (such as the steam regions of FIG. 3) corresponding to the turbid media of interest (steam flow) among the one or more turbid media present in the ROI. The plurality of turbid regions corresponding to the turbid media of interest are identified based on a set of features uniquely defining the turbid media of interest. The set of features refers to specific values of the temporal features, such as the average velocity, the volume, the area, the frequency moments, and the opacity, specific to the turbid media of interest.
[0028] Consider a scenario where steam moving upward from a geyser source comes across an obstacle such as a wide light pole. The wide light pole divides the steam into two stream flows, also referred to as steam regions, that move upward but away from each other. This is depicted in FIG. 3 at source location 3. Thus, a single source leads to two steam flow groups as it moves upward. Similarly, due to wind conditions, steam from source location 1, 2 or 3 may end up in multiple streams and may represent multiple groups. Thus, to address this scenario, at step 210, the one or more hardware processors 104 are configured to cluster the plurality of turbid regions into a plurality of clusters based on the temporal features and a thermal profile obtained from the thermal image frames. All the steam regions or steam flows are clustered together based on their temporal signatures, the thermal profile, as well as the distance between each of the steam flows or steam regions. All the extracted features can be used for clustering, with distance measures as the clustering criterion. Suppose X and Y represent the sets of features extracted for two identified flows; then the likelihood of both belonging to the same cluster is high if the distance of Y from X is small. Besides the temporal features, other features such as the identified source locations, and other sensor information such as the pressure and the depth at which the identified turbid media lies, can be used as features for further clustering.
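The distance criterion described above can be sketched as a simple threshold-based grouping of flows by their feature vectors. The Euclidean metric and the threshold value are illustrative assumptions, since the disclosure does not fix a particular clustering algorithm.

```python
import numpy as np

def cluster_flows(feature_vectors, threshold=1.0):
    """Assign each identified flow a cluster label: a flow joins the
    cluster of the first earlier flow whose feature vector lies within
    `threshold` (Euclidean distance); otherwise it starts a new cluster."""
    labels = [-1] * len(feature_vectors)
    next_label = 0
    for i, x in enumerate(feature_vectors):
        for j in range(i):
            d = np.linalg.norm(np.asarray(x, float) - np.asarray(feature_vectors[j], float))
            if d < threshold:
                labels[i] = labels[j]
                break
        if labels[i] == -1:            # no nearby flow found
            labels[i] = next_label
            next_label += 1
    return labels
```

For example, two flows with nearly identical temporal features fall into one cluster, while a distant third flow forms its own.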
[0029] Each cluster among the plurality of clusters comprises one or more turbid regions among the plurality of turbid regions. As understood, different clusters are identified by the system 100 since a single source of steam can lead to multiple steam clusters. The clusters can be tracked separately over time. However, the source of each of these clusters needs to be tracked to identify which of the clusters are generated from the same origin or source. Once the source is known, the amount of steam and the characteristics or features of the steam at every time instance can be aggregated to calculate the outburst (event). The aggregate steam and its characteristics are used for predicting future outbursts. Thus, once the clusters are identified, at step 212, the one or more hardware processors 104 are configured to fit a convex hull to each turbid region of each cluster to determine a plurality of minimum points of the convex hull fitted to each cluster. Each minimum point is identified by corresponding location coordinates. The disconnected steam regions are also combined using the clustering approach.
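A pure-Python sketch of the convex-hull step: fit a hull to a turbid region's pixel coordinates and take its minimum point. Here the "minimum point" is interpreted as the hull vertex lowest in the image (largest row value), which for rising steam corresponds to the source side; this interpretation and the helper names are assumptions for the sketch.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull over (x, y) points."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def source_point(region_pixels):
    """Minimum point of the fitted hull in image coordinates: with rows
    growing downward, the candidate source is the hull vertex with the
    largest row value (lowest on screen)."""
    hull = convex_hull(region_pixels)
    return max(hull, key=lambda p: p[1])
```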
[0030] At step 214, the one or more hardware processors 104 are configured to group the plurality of minimum points in accordance with corresponding location coordinates to identify a plurality of source locations of a plurality of turbid activities corresponding to the turbid media of interest. Each source location among the plurality of source locations corresponds to one or more minimum points, among the plurality of minimum points, which have mapping location coordinates. As depicted in FIG. 3, two minimum points of two convex hulls fitted to two separate clusters map onto each other, indicating a single source location of steam flow. There always exists a possibility that the steam flow(s), or the flow of the turbid region at the source location, may be discontinuous. Thus, at step 214, stabilizing of an unstable source location among the plurality of source locations is performed using a moving window approach. The moving window approach used by the system 100 comprises the following steps:
[0031] Identify coordinates of the source locations of various steam flows or the steam regions.
[0032] Due to the discontinuous nature of steam, the coordinates of the source locations may vary, as the minimum of the convex hull in each frame is identified as the source location.
[0033] After identifying the sources of the steam flows for a frame at time instant t, the actual source location is computed as the weighted average of the source locations at time instant t as well as the source locations identified in past frames of the corresponding steam flows. The set of frames within a certain duration is termed a window of frames. Such a weighted average stabilizes the steam flow source location.
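The weighted average in the moving window approach can be sketched as follows. The linearly increasing recency weights are an assumption for the sketch; the disclosure requires only that some weighted average over the window be taken.

```python
import numpy as np

def stabilize_source(history, weights=None):
    """Stabilize a source location from its detections over the last
    few frames (the moving window). `history` is a sequence of (x, y)
    coordinates, oldest first; by default, more recent frames receive
    proportionally more weight."""
    history = np.asarray(history, dtype=float)          # shape (window, 2)
    if weights is None:
        weights = np.arange(1, len(history) + 1, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                            # normalise to 1
    return tuple(weights @ history)                     # weighted mean (x, y)
```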
[0034] At step 216, the one or more hardware processors 104 are configured to monitor and detect temporal variations occurring in the plurality of turbid activities corresponding to the plurality of source locations by performing a frame by frame analysis of the temporal features extracted from the flow vector information and the corresponding thermal profile of the ROI obtained from the thermal cameras.
[0035] At step 218, the one or more hardware processors 104 are configured to predict one or more source locations among the plurality of source locations as target locations for occurrence of events (for example, a sudden outburst of geyser activity in the coal mining region), based on a prediction model. The prediction model utilizes a clustered historical data, the plurality of source locations, and the detected temporal variations for prediction. Thus, the prediction is based not only on features extracted from the currently analyzed camera data and sensor data but also on historical data. As understood, consideration of historical data adds to the accuracy of prediction. For example, after identifying the various sources of steam flow, if the past data of all those identified source locations is available, then the prediction performed by the system 100 is much more efficient in predicting an outburst well in advance, since the source locations which had outbursts in the past are more likely to experience an outburst again in the current scenario as well. The predictive analytics approach used by the prediction model is explained in conjunction with FIG. 4. The clustered historical data is generated by clustering a historical data corresponding to the ROI. The historical data comprises time series data providing variations of the temperature profile in the past and temporal signatures of past events. Clustering of the historical data is a high order clustering and comprises extracting the temporal signatures of past events from the time series data and clustering multiple turbid media in a single cluster based on the temporal signatures of the past events and the distance between the turbid regions.
[0036] If the prediction is classified as a potential event, then this block raises an alarm. The source location (target location) of the potential event is obtained by a calibrated camera set up along with range information, which comes through the data feed, to map the pixel location of the event to the real world location accurately. Thus, the system can generate alarms to alert the people working on the ground in the ROI where there are highly probable chances of events such as a possible geyser outburst.
[0037] FIG. 4 is a functional block diagram of the system of FIG. 1 depicting prediction model providing pattern mining for predictive analytics for event prediction in the ROI, in accordance with some embodiments of the present disclosure.
[0038] As depicted, an image extraction block of the system 100 receives CCTV feeds, or the camera data received from the visual cameras and the thermal cameras of the multimodal sensing unit 110, providing video footage of the one or more turbid media present in the ROI. The image extraction block extracts frames from the received stream of camera data and converts the frames to the appropriate format required for further processing. For example, any format that is supported by the opencv libraries for reading videos, such as the mat, mp4 or avi format, may be used. The camera data provides spatio-temporal data of the ROI. A day/night classification block further classifies each frame into a day frame or a night frame based on the light conditions identified in the frames. All frames classified as night frames are discarded and not considered for further processing, owing to the challenges in information extraction from these frames due to poor visibility. Further, a filtering block filters high quality images or frames from the classified frames and associates corresponding time stamps with the filtered images or frames. The time stamping provides the temporal information, while the images provide the spatial information of the turbid media in the ROI. Further, a metadata extraction block extracts other data associated with the images or frames, such as size, resolution, and day/night information, along with the time stamp. A metadata persistence block stores all the extracted metadata in a database such as the database 108.
[0039] In parallel to the day/night classification, the extracted images or frames from the camera data are processed by a feature extraction block. This block extracts features, for example around 25 features, from the images. These features are a combination of texture based gray level co-occurrence matrix (GLCM) features, frequency domain based features, motion based features, haze relevant features and the like. The extracted features are processed by a mapping block, which maps the extracted features to the timestamp, such as the date-time information at which the data was captured. This helps in forming a spatio-temporal relationship for the extracted features. Further, a qualification block performs qualification and shortlisting of relevant features for pattern mining, by a feature reduction method such as Principal Component Analysis (PCA). There is always a chance that the captured features are correlated with each other. Hence, using PCA, the right features to be used further for predictive analysis can be identified. For example, if a feature Y is highly correlated with X, then Y is not adding any extra information about the turbid media of interest that X cannot represent. Thus, removing the redundancy is beneficial for improving the speed of the predictive analysis. These qualified relevant features can be visualized via a viewing block through time series graphs. The qualified features are stored in a time series database in the database 108, providing feature persistence. Further, pattern mining is performed on the qualified features. Herein, multiple reference signals depicting specific patterns of the turbid media are selected. These reference signals are compared with the features extracted from the images to output the correlation strength. Based on this similarity index, similar patterns are extracted from the images of the camera data. Similarity measurement is performed by finding the correlation between two time series signals. For example, the LCSS (Longest Common Subsequence) algorithm may be used for the correlation. Each of the features extracted from the camera data is converted to a time series signal and is then compared with the corresponding feature from the reference data. A time series correlation measure signal is generated. A distance measure, such as the Mahalanobis distance, combines the correlation results corresponding to each feature and is converted to a probability measuring the similarity between the reference data and the data extracted from the live feed. An automatic segregation/clustering block clusters the images using the similarity measure scores. The probability resulting from the similarity measure indicates the strength of the similarity between the reference data and the data extracted from the live feed. The image streams (video clips) having high similarity are then extracted and saved to the database. Further, a pattern discovery block combines multiple video clips corresponding to a particular reference signal and classifies them as a pattern. The information from these clips is then used for predictive analysis. A modelled template of behavior is matched with the CCTV feed and similar patterns are unearthed.
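Two of the blocks described above can be sketched briefly: the PCA-based qualification (via SVD of the centred feature matrix) and the LCSS comparison of two time series. Both are minimal illustrations under assumed parameters (`n_keep`, `eps`), not the deployed implementations.

```python
import numpy as np

def qualify_features(X, n_keep=2):
    """PCA-style feature reduction: project the (frames x features)
    matrix X onto its top principal components, dropping redundant,
    highly correlated features."""
    Xc = X - X.mean(axis=0)                       # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_keep].T                     # reduced representation

def lcss_similarity(a, b, eps=0.5):
    """LCSS (Longest Common Subsequence) similarity for two time
    series: samples match when they differ by less than eps; the score
    is the matched length normalised by the shorter series length,
    giving a similarity in [0, 1]."""
    n, m = len(a), len(b)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) < eps:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m] / min(n, m)
```

The LCSS score for each feature could then be combined across features by a distance measure such as the Mahalanobis distance, as the text describes.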
[0040] Thus, the system and method disclosed herein provide more accurate prediction of events using the historical data along with the current data.
[0041] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[0042] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message

therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means, and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[0043] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0044] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent

to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0045] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[0046] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

We Claim:
1. A method for monitoring of a turbid media of interest to predict events, the method comprising:
receiving (202), via one or more hardware processors, input data comprising camera data and sensor data from a multimodal sensing unit providing long distance surveillance, wherein the input data provides information of one or more turbid media present in a Region of Interest (ROI), and wherein the multimodal sensing unit comprises:
a plurality of multimodal cameras, comprising a combination of portable and fixed visual cameras and thermal cameras, covering a plurality of subregions within the ROI, and positioned to capture and stream the camera data comprising visual image frames and thermal image frames, wherein the camera data provides multiple views of the plurality of subregions comprising local views and global views; and
a plurality of multimodal sensors positioned to capture the sensor data comprising a plurality of parameters associated with the plurality of subregions;
preprocessing (204), via the one or more hardware processors, the camera data and the sensor data to eliminate noise, wherein the camera data is preprocessed using a video stabilization approach to eliminate noise induced in the camera data due to undesired motion of the plurality of multimodal cameras;
deriving (206), via one or more hardware processors, a flow vector information from the visual image frames and thermal image frames to extract temporal features, of turbid regions of the one or more turbid media within every image frame, from the flow vector information;
processing (208), via the one or more hardware processors, the sensor data and the temporal features extracted from the flow vector information to identify a plurality of turbid regions corresponding to a turbid

media of interest among the one or more turbid media present in the ROI, wherein the plurality of turbid regions corresponding to the turbid media of interest are identified based on a set of features uniquely defining the turbid media of interest;
clustering (210), via the one or more hardware processors, the plurality of turbid regions into a plurality of clusters based on the temporal features and a thermal profile obtained from the thermal image frames, wherein each cluster among the plurality of clusters comprises one or more turbid regions among the plurality of turbid regions;
fitting (212), via the one or more hardware processors, a convex hull to each turbid region of each cluster to determine a plurality of minimum points of the convex hull fitted to each turbid region, wherein each minimum point is identified by corresponding location coordinates;
grouping (214), via the one or more hardware processors, the plurality of minimum points in accordance with corresponding location coordinates to identify a plurality of source locations of a plurality of turbid activities corresponding to the turbid media of interest, wherein each source location among the plurality of source locations corresponds to one or more minimum points which have mapping location coordinates;
monitoring and detecting (216), via the one or more hardware processors, temporal variations occurring in the plurality of turbid activities corresponding to the plurality of source locations by performing frame by frame analysis of the temporal features extracted from flow vector information and corresponding thermal profile of the ROI obtained from the thermal cameras; and
predicting (218), via the one or more hardware processors, one or more source locations among the plurality of source locations as target locations for occurrence of events based on a prediction model, wherein the prediction model utilizes a clustered historical data, the plurality of source locations, the detected temporal variations and the thermal profile of each frame for prediction.

2. The method as claimed in claim 1, wherein the method comprises stabilizing an unstable source location, among the plurality of source locations, comprising a non-continuous turbid activity by processing the source location using a moving window approach.
3. The method as claimed in claim 1, wherein the clustered historical data is generated by clustering a historical data corresponding to the ROI, wherein the historical data comprises time series data providing variations of temperature profile in past and temporal signatures of past events.
4. The method as claimed in claim 3, wherein generating the clustered historical data comprises:
extracting the temporal signatures of past events from the time series data; and
clustering multiple turbid media in a single cluster based on the temporal signature of the past events and distance between each turbid media.
5. The method as claimed in claim 1, wherein the method further comprises obtaining the flow vector information by utilizing flow cameras.
6. A system (100) for monitoring of a turbid media of interest to predict outbursts, the system (100) comprising:
a memory (102) storing instructions;
one or more Input/Output (I/O) interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more I/O interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive input data comprising camera data and sensor data from a multimodal sensing unit providing long distance surveillance, wherein the input data provides information of one or more turbid media present in a

Region of Interest (ROI), and wherein the multimodal sensing unit comprises:
a plurality of multimodal cameras, comprising a combination of portable and fixed visual cameras and thermal cameras, covering a plurality of subregions within the ROI, and positioned to capture and stream the camera data comprising visual image frames and thermal image frames, wherein the camera data provides multiple views of the plurality of subregions comprising local views and global views; and
a plurality of multimodal sensors positioned to capture the sensor data comprising a plurality of parameters associated with the plurality of subregions;
preprocess the camera data and the sensor data to eliminate noise, wherein the camera data is preprocessed using a video stabilization approach to eliminate noise induced in the camera data due to undesired motion of the plurality of multimodal cameras;
derive a flow vector information from the visual image frames and the thermal image frames to extract temporal features, of turbid regions of the one or more turbid media within every image frame, from the flow vector information;
process the sensor data and the temporal features extracted from the flow vector information to identify a plurality of turbid regions corresponding to a turbid media of interest among the one or more turbid media present in the ROI, wherein the plurality of turbid regions corresponding to the turbid media of interest are identified based on a set of features uniquely defining the turbid media of interest;
cluster the plurality of turbid regions into a plurality of clusters based on the temporal features and a thermal profile obtained from the thermal image frames, wherein each cluster among the plurality of clusters comprises one or more turbid regions among the plurality of turbid regions;

fit a convex hull to each turbid region of each cluster to determine a plurality of minimum points of the convex hull fitted to each turbid region, wherein each minimum point is identified by corresponding location coordinates;
group the plurality of minimum points in accordance with corresponding location coordinates to identify a plurality of source locations of a plurality of turbid activities corresponding to the turbid media of interest, wherein each source location among the plurality of source locations corresponds to one or more minimum points which have mapping location coordinates;
monitor temporal variations occurring in the plurality of turbid activities corresponding to the plurality of source locations by performing frame by frame analysis of the temporal features extracted from flow vector information and corresponding thermal profile of the ROI obtained from the thermal cameras; and
predict one or more source locations among the plurality of source locations as target locations for occurrence of events based on a prediction model, wherein the prediction model utilizes a clustered historical data, the plurality of source locations, the detected temporal variations and the thermal profile of each frame for prediction.
7. The system (100) as claimed in claim 6, wherein the one or more hardware processors (104) are configured to stabilize an unstable source location, among the plurality of source locations, comprising a non-continuous turbid activity by processing the source location using a moving window approach.
8. The system (100) as claimed in claim 6, wherein the one or more hardware processors (104) are configured to generate the clustered historical data by clustering a historical data corresponding to the ROI, wherein the historical data comprises time series data providing variations of temperature profile in past and temporal signatures of past events.

9. The system (100) as claimed in claim 8, wherein the one or more hardware processors (104) are configured to generate the clustered historical data by:
extracting the temporal signatures of past events from the time series data; and
clustering multiple turbid media in a single cluster based on the temporal signature of the past events and distance between each turbid media.

Documents

Orders

Section Controller Decision Date
43(1) Shubham Upadhyay 2025-09-24

Application Documents

# Name Date
1 202021004160-ABSTRACT [17-05-2022(online)].pdf 2022-05-17
2 202021004160-FORM 3 [20-02-2025(online)].pdf 2025-02-20
3 202021004160-STATEMENT OF UNDERTAKING (FORM 3) [30-01-2020(online)].pdf 2020-01-30
4 202021004160-CLAIMS [17-05-2022(online)].pdf 2022-05-17
5 202021004160-Information under section 8(2) [20-02-2025(online)].pdf 2025-02-20
6 202021004160-REQUEST FOR EXAMINATION (FORM-18) [30-01-2020(online)].pdf 2020-01-30
7 202021004160-FER_SER_REPLY [17-05-2022(online)].pdf 2022-05-17
8 202021004160-FORM 18 [30-01-2020(online)].pdf 2020-01-30
9 202021004160-PETITION UNDER RULE 137 [20-02-2025(online)].pdf 2025-02-20
10 202021004160-RELEVANT DOCUMENTS [20-02-2025(online)].pdf 2025-02-20
11 202021004160-FORM 1 [30-01-2020(online)].pdf 2020-01-30
12 202021004160-FER.pdf 2021-12-06
13 202021004160-Written submissions and relevant documents [20-02-2025(online)].pdf 2025-02-20
14 202021004160-FIGURE OF ABSTRACT [30-01-2020(online)].jpg 2020-01-30
15 202021004160-CORRESPONDENCE(IPO)-(CERTIFIED COPY OF WIPO DAS)-(25-02-2021).pdf 2021-02-25
16 202021004160-FORM-26 [31-01-2025(online)].pdf 2025-01-31
17 202021004160-DRAWINGS [30-01-2020(online)].pdf 2020-01-30
18 202021004160-Covering Letter [23-02-2021(online)].pdf 2021-02-23
19 202021004160-Form 1 (Submitted on date of filing) [23-02-2021(online)].pdf 2021-02-23
20 202021004160-DECLARATION OF INVENTORSHIP (FORM 5) [30-01-2020(online)].pdf 2020-01-30
21 202021004160-Correspondence to notify the Controller [30-01-2025(online)].pdf 2025-01-30
22 202021004160-COMPLETE SPECIFICATION [30-01-2020(online)].pdf 2020-01-30
23 202021004160-Power of Attorney [23-02-2021(online)].pdf 2021-02-23
24 202021004160-Proof of Right [14-05-2020(online)].pdf 2020-05-14
25 Abstract1.jpg 2020-02-06
26 202021004160-FORM-26 [24-03-2020(online)].pdf 2020-03-24
27 202021004160-US(14)-HearingNotice-(HearingDate-06-02-2025).pdf 2025-01-16
27 202021004160-STATEMENT OF UNDERTAKING (FORM 3) [30-01-2020(online)].pdf 2020-01-30
28 202021004160-PatentCertificate24-09-2025.pdf 2025-09-24
29 202021004160-IntimationOfGrant24-09-2025.pdf 2025-09-24

Search Strategy

1 202021004160E_25-10-2021.pdf

ERegister / Renewals

3rd: 01 Oct 2025 (From 30/01/2022 To 30/01/2023)

4th: 01 Oct 2025 (From 30/01/2023 To 30/01/2024)

5th: 01 Oct 2025 (From 30/01/2024 To 30/01/2025)

6th: 01 Oct 2025 (From 30/01/2025 To 30/01/2026)