Abstract: State of the art techniques require expensive ultrahigh speed cameras and consume large resources for detection of objects. Embodiments of the present disclosure provide a method and system for line contour and color gradient contour-based detection of an object. A single standard camera captures an image and simultaneously obtains environmental factors with reference to GPS coordinates of the capture. A thermalized image is generated from the input image and processed in parallel. Distortion in the image is computed in terms of a drag index in accordance with the environmental factors. Summed differential blur is derived from the drag index to obtain undistorted reconstructed images, which are processed using an iterative process creating a line contour and a color gradient contour in each iteration, wherein partial images generated between the first and the new contour are analyzed by an ML model to predict the object in each loop, which continues until the prediction error coefficient reaches a minimal value.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR LINE CONTOUR AND COLOR GRADIENT CONTOUR BASED DETECTION OF OBJECT
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
Preamble to the description
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001] The embodiments herein generally relate to automated detection of objects in motion and, more particularly, to a method and system for line contour and color gradient contour based detection of object.
BACKGROUND
[002] Automated, accurate detection of objects in motion is a challenging task. Captured images of objects may have blur due to pixel drag, introduced as an effect of the high speed of the objects. Also, environmental factors such as fog, rain, low illumination, mist and so on can affect visibility during image capture. Moreover, these environmental factors are not standard; rather, they depend on the capture location, considering microclimates.
[003] To address issues of blur and loss of clarity in the captured images due to speed of objects, and of visibility due to environmental factors, state of the art approaches install higher specification cameras such as ultrahigh speed cameras, high resolution cameras, night vision cameras and so on for object detection. However, the higher specification cameras are very expensive, and each type serves only a single purpose, thus requiring multiple cameras of different specifications or types to address image capture quality issues due to multiple factors at the location of capture. This is obviously expensive and not a practical solution. Further, to identify objects from captured images, conventional approaches parse images, convert them into arrays of color pixels, extract objects and then match images. This is a computationally and memory intensive, time consuming and storage consuming approach.
SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
[005] For example, in one embodiment, a method for line contour and color gradient contour based detection of object is provided. The method receives an input image of an object in motion, captured via a Capture Camera (CC) deployed at a location, wherein the CC captures a plurality of images at one of i) predefined time intervals, and ii) on receiving a trigger from Passive Infrared (PIR) sensors. Further, the method generates a line contour image and a thermalized color gradient contour image from the input image. Further, the method dynamically computes a disturbance index, introduced due to one or more current environmental factors at the location and motion of the object during capturing of the input image. The one or more current environmental factors are obtained from the plurality of sensing devices deployed along with the CC and a plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates of the CC. Thereafter, the method determines a pixel drag for the input image, determines an actual Field of Object (FoO) from the input image, and determines an expected FoO based on the actual FoO, the disturbance index and the pixel drag. Furthermore, the method dynamically computes a drag index in terms of summed differential blur of the actual FoO and the expected FoO. The summed differential blur is computed by representing the input image in a 3-dimensional pixel space by using a multi-dimensional matrix. Thereafter, the method obtains a reconstructed line contour image and a reconstructed thermalized color gradient contour image using the drag index and detects the object using an iterative process, performed simultaneously over the reconstructed line contour image and the thermalized color gradient contour image by iteratively creating and comparing one by one successive line contours and color gradient contours. The pixel locations corresponding to the line contours and the color gradient contours are stored and used for detecting the object. The iterative process comprises: a) detecting an object boundary in the reconstructed line contour image and the reconstructed thermalized color gradient contour image; b) locating a plurality of trajectory start points positioned equidistantly on the object boundary
of the reconstructed line contour image and the reconstructed thermalized color gradient contour image; c) determining a plurality of trajectories of contour convergence, wherein each of the plurality of trajectories of contour convergence initiates from each of the plurality of trajectory start points and converges towards a central location of highest crest or trough on the reconstructed line contour image and the reconstructed thermalized color gradient contour image; d) creating a current line contour and a current color-gradient contour for the reconstructed line contour image and the reconstructed thermalized color gradient contour image, with contour creation initiating from the boundary towards the central location in steps following the plurality of trajectories; e) creating a first successive line contour and a first successive color gradient contour for the reconstructed object image and the reconstructed thermalized image, wherein pixel locations corresponding to the current line contour and the current color gradient contour are stored for object detection; f) slicing, in an iterative way, and streaming one by one, (i) an area between the current line contour and the first successive line contour to generate a current line contour based sliced image and (ii) an area between the current color gradient contour and the first successive color gradient contour to generate a current color gradient-based sliced image, wherein the sliced area increases in each iteration as the contour creation approaches the central location; g) analyzing each of the current line contour-based sliced image and the current color gradient based sliced image by a Machine Learning (ML) model trained to predict an object type in the input image into one among a plurality of object types; h) determining a mismatch by superimposing pixels corresponding to the current line contour based sliced image and the current color gradient based sliced image with one or more pre-identified objects having gradients and curvatures with index corrections applied; i) computing an error coefficient of the ML model for the predicted object type based on an error, as a result of the deviation from among the pre-identified objects; j) detecting the object in the input image to be of the predicted object type by the ML model if the error coefficient reaches a minimal value; and k) repeating the iterative process by generating a second successive line
contour and a second successive color gradient contour, if the error coefficient has not reached the minimal value.
[006] The method further comprises validating the predicted object type using a validation image captured by the validation Camera (VC), the validating comprising: a) calculating a reverse spin for the VC in accordance with the computed dynamic disturbance index and the computed dynamic drag index; b) rotating the VC using a stepper motor in accordance with the reverse spin to set a VC capture position; c) capturing the validation image from the VC capture position; and d) overlaying the detected object generated from the input image captured by the CC with the validation image to validate in accordance with the error coefficient.
[007] The method further comprises updating a central system with the computed drag index for the CC, wherein the drag index is reused for one or more CCs in a predefined radius of the CC defining a grid, if one or more controllers corresponding to the one or more CCs trigger a request for drag index computation within a predefined time span of the drag index computation, eliminating repeated drag index computation.
[008] Furthermore, in the method, one or more current environmental factors obtained from the plurality of sensing devices deployed along with the CC and the plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates are reused for one or more CCs in the grid in case of failure of the corresponding plurality of sensing devices of the one or more CCs.
[009] In another aspect, a system for line contour and color gradient contour based detection of object is provided. The system comprises: a Capture Camera (CC), a Controller, a plurality of sensing devices, Passive Infrared (PIR) sensors, a validation Camera (VC), a stepper motor, and a power source. The controller comprises a memory storing instructions, one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to receive an input image of an object in motion, captured via
the CC, deployed at a location, wherein the CC captures a plurality of images at one of i) predefined time intervals, and ii) on receiving a trigger from Passive Infrared (PIR) sensors. Further, generate a line contour image and a thermalized color gradient contour image from the input image. Further, dynamically compute a disturbance index, introduced due to one or more current environmental factors at the location and motion of the object during capturing of the input image. The one or more current environmental factors are obtained from the plurality of sensing devices deployed along with the CC and a plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates of the CC. Thereafter, determine a pixel drag for the input image, determine an actual Field of Object (FoO) from the input image, determine an expected FoO based on the actual FoO, the disturbance index and the pixel drag. Furthermore, dynamically compute a drag index in terms of summed differential blur of the actual FoO and the expected FoO. The summed differential blur is computed by representing the input image in a 3-dimensional pixel space by using a multi-dimensional matrix. Thereafter, obtaining a reconstructed line contour image and a reconstructed thermalized color gradient contour image using the drag index and detecting the object using an iterative process, performed simultaneously over the reconstructed line contour image and the thermalized color gradient contour image by iteratively creating and comparing one by one successive line contours and color gradient contours. The pixel locations corresponding to the line- contours and the color-gradient contours are stored and used for detecting the object. The iterative process comprising: a) detecting an object boundary in the reconstructed line contour image and the reconstructed thermalized color gradient contour image; b) locating a plurality of trajectory start points positioned equidistantly on the object boundary of the reconstructed line contour image and the reconstructed thermalized color gradient contour image; c) determining a plurality of trajectories of contour convergence, wherein each of the plurality of trajectories of contour convergence initiates from each of the plurality of trajectory start points and converges towards a central location of highest crest or trough on the reconstructed line contour image and the reconstructed thermalized color gradient contour image; d) creating a
current line contour and a current color-gradient contour for the reconstructed line contour image and the reconstructed thermalized color gradient contour image, with contour creation initiating from the boundary towards the central location in steps following the plurality of trajectories; e) creating a first successive line contour and a first successive color gradient contour for the reconstructed object image and the reconstructed thermalized image, wherein pixel locations corresponding to the current line contour and the current color gradient contour are stored for object detection; f) slicing, in an iterative way, and streaming one by one, (i) an area between the current line contour and the first successive line contour to generate a current line contour based sliced image and (ii) an area between the current color gradient contour and the first successive color gradient contour to generate a current color gradient-based sliced image, wherein the sliced area increases in each iteration as the contour creation approaches the central location; g) analyzing each of the current line contour-based sliced image and the current color gradient based sliced image by a Machine Learning (ML) model trained to predict an object type in the input image into one among a plurality of object types; h) determining a mismatch by superimposing pixels corresponding to the current line contour based sliced image and the current color gradient based sliced image with one or more pre-identified objects having gradients and curvatures with index corrections applied; i) computing an error coefficient of the ML model for the predicted object type based on an error, as a result of the deviation from among the pre-identified objects; j) detecting the object in the input image to be of the predicted object type by the ML model if the error coefficient reaches a minimal value; and k) repeating the iterative process by generating a second successive line contour and a second successive color gradient contour, if the error coefficient has not reached the minimal value.
[0010] The system is further configured to validate the predicted object type using a validation image captured by the validation Camera (VC), the validating comprising: a) calculating a reverse spin for the VC in accordance with the computed dynamic disturbance index and the computed dynamic drag index; b) rotating the VC using a stepper motor in accordance with the reverse spin to set a
VC capture position; c) capturing the validation image from the VC capture position; and d) overlaying the detected object generated from the input image captured by the CC with the validation image to validate in accordance with the error coefficient.
[0011] The system via the one or more hardware processors is further configured to update a central system with the computed drag index for the CC, wherein the drag index is reused for one or more CCs in a predefined radius of the CC defining a grid, if one or more controllers corresponding to the one or more CCs trigger a request for drag index computation within a predefined time span of the drag index computation, eliminating repeated drag index computation.
[0012] Furthermore, one or more current environmental factors obtained from the plurality of sensing devices deployed along with the CC and the plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates are reused for one or more CCs in the grid in case of failure of corresponding plurality of sensing devices of the one or more CCs.
[0013] In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions, which when executed by one or more hardware processors cause a method for line contour and color gradient contour based detection of object to be performed.
[0014] The method receives an input image of an object in motion, captured via a Capture Camera (CC) deployed at a location, wherein the CC captures a plurality of images at one of i) predefined time intervals, and ii) on receiving a trigger from passive infrared (PIR) sensors. Further, the method generates a line contour image and a thermalized color gradient contour image from the input image. Further, the method dynamically computes a disturbance index, introduced due to one or more current environmental factors at the location and motion of the object during capturing of the input image. The one or more current environmental factors are obtained from the plurality of sensing devices deployed along with the CC and a plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates of the CC. Thereafter, the method performs the following: determine a pixel drag for the input image, determine an actual Field of Object (FoO) from the input
image, determine an expected FoO based on the actual FoO, the disturbance index and the pixel drag. Furthermore, dynamically computes a drag index in terms of summed differential blur of the actual FoO and the expected FoO. The summed differential blur is computed by representing the input image in a 3-dimensional pixel space by using a multi-dimensional matrix. Thereafter, obtains a reconstructed line contour image and a reconstructed thermalized color gradient contour image using the drag index and detecting the object using an iterative process, performed simultaneously over the reconstructed line contour image and the thermalized color gradient contour image by iteratively creating and comparing one by one successive line contours and color gradient contours. The pixel locations corresponding to the line- contours and the color-gradient contours are stored and used for detecting the object. The iterative process comprising: a) detecting an object boundary in the reconstructed line contour image and the reconstructed thermalized color gradient contour image; b) locating a plurality of trajectory start points positioned equidistantly on the object boundary of the reconstructed line contour image and the reconstructed thermalized color gradient contour image; c) determining a plurality of trajectories of contour convergence, wherein each of the plurality of trajectories of contour convergence initiates from each of the plurality of trajectory start points and converges towards a central location of highest crest or trough on the reconstructed line contour image and the reconstructed thermalized color gradient contour image; d) creating a current line contour and a current color-gradient contour for the reconstructed line contour image and the reconstructed thermalized color gradient contour image, with contour creation initiating from the boundary towards the central location in steps following the plurality of trajectories; e) creating a first successive line contour and a first successive color gradient contour for the reconstructed object image and the reconstructed thermalized image, wherein pixel locations corresponding to the current line contour and the current color gradient contour are stored for object detection; f) slicing, in an iterative way, and streaming one by one, (i) an area between the current line contour and the first successive line contour to generate a current line contour based sliced image and (ii) an area between the current color gradient contour and the first successive color
gradient contour to generate a current color gradient-based sliced image, wherein the sliced area increases in each iteration as the contour creation approaches the central location; g) analyzing each of the current line contour-based sliced image and the current color gradient based sliced image by a Machine Learning (ML) model trained to predict an object type in the input image into one among a plurality of object types; h) determining a mismatch by superimposing pixels corresponding to the current line contour based sliced image and the current color gradient based sliced image with one or more pre-identified objects having gradients and curvatures with index corrections applied; i) computing an error coefficient of the ML model for the predicted object type based on an error, as a result of the deviation from among the pre-identified objects; j) detecting the object in the input image to be of the predicted object type by the ML model if the error coefficient reaches a minimal value; and k) repeating the iterative process by generating a second successive line contour and a second successive color gradient contour, if the error coefficient has not reached the minimal value.
[0015] The method further comprises validating the predicted object type using a validation image captured by the validation Camera (VC), the validating comprising: a) calculating a reverse spin for the VC in accordance with the computed dynamic disturbance index and the computed dynamic drag index; b) rotating the VC using a stepper motor in accordance with the reverse spin to set a VC capture position; c) capturing the validation image from the VC capture position; and d) overlaying the detected object generated from the input image captured by the CC with the validation image to validate in accordance with the error coefficient.
[0016] The method further comprises updating a central system with the computed drag index for the CC, wherein the drag index is reused for one or more CCs in a predefined radius of the CC defining a grid, if one or more controllers corresponding to the one or more CCs trigger a request for drag index computation within a predefined time span of the drag index computation, eliminating repeated drag index computation.
[0017] Furthermore, in the method, one or more current environmental factors obtained from the plurality of sensing devices deployed along with the CC and the plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates are reused for one or more CCs in the grid in case of failure of the corresponding plurality of sensing devices of the one or more CCs.
[0018] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[0020] FIG. 1A is a block diagram of a system for line contour and color gradient contour based detection of an object, in accordance with some embodiments of the present disclosure.
[0021] FIG. 1B is a functional block diagram of a controller of the system that controls the line contour and color gradient contour based detection of the object, in accordance with some embodiments of the present disclosure.
[0022] FIG. 1C illustrates an architectural and process overview of the system of FIG. 1, in accordance with some embodiments of the present disclosure.
[0023] FIGS. 2A through 2C (collectively referred to as FIG. 2) is a flow diagram illustrating a method for line contour and color gradient contour based detection of the object, using the system of FIG. 1A, in accordance with some embodiments of the present disclosure.
[0024] FIG. 3 is an example illustrating actual Field of Object (FoO) and an expected FoO computed by the controller of FIG. 2 for determining summed differential blur in the object captured in an input image, in accordance with some embodiments of the present disclosure.
[0025] FIG. 4A is an example of a captured object image using the summed differential blur, and FIG. 4B depicts line contours created on a reconstructed line contour image using an iterative process, in accordance with some embodiments of the present disclosure.
[0026] FIG. 5A is an example of a thermalized image using the differential blur, and FIG. 5B depicts color gradient contours created on a reconstructed thermalized image using the iterative process, in accordance with some embodiments of the present disclosure.
[0027] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems and devices embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION OF EMBODIMENTS
[0028] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[0029] State of the art techniques require expensive ultrahigh speed cameras and consume large resources for detection of objects in motion. Embodiments of the present disclosure provide a method and system for line contour and color gradient contour based detection of an object. A single standard camera captures an image and simultaneously obtains environmental factors with reference to GPS coordinates of the capture. The standard camera herein refers to a commonly used and easily
available, low priced, commodity hardware based camera. A thermalized image is generated from the input image, and both images are processed in parallel. Distortion in the image is computed in terms of a drag index in accordance with the environmental factors. Summed differential blur is derived from the drag index to obtain undistorted reconstructed images, which are processed using an iterative process creating a line contour and a color gradient contour in each iteration, wherein partial images are generated and streamed between the first and the new contour and are analyzed by machine learning (ML) models to predict the object in each loop, which continues until the prediction error coefficient reaches a minimal value. The method disclosed herein consumes less computational power, time, storage, and memory since processing focuses on contour based image slicing and not the entire image. Moreover, the ML model constantly learns from its history of contours and improves the prediction accuracy. ML models known in the art, such as SVM, binary classifiers, Random Forest classifiers and the like, can be used. The most appropriate ML model to be used is selected based on the picture captured, the environment, etc. There are two mechanisms to select the best ML model.
[0030] Algorithms for the ML models can be triggered in parallel when it is in doubt which will yield the best results given the various indexes computed, which are a reflection of various input parameters captured as a part of the environmental aspects and the quality of the picture captured.
[0031] Alternatively, based on the stored benchmarks of results achieved, mapped to the current conditions captured and the indexes computed, the ML model that had yielded the best results in the past under comparable conditions is directly utilized.
[0032] This benchmark information is stored in the databases as depicted in FIG. 1C and is retrieved from there.
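The benchmark-driven selection described in paragraphs [0030] and [0031] above can be illustrated with a short sketch. The following Python snippet is illustrative only and is not part of the disclosed implementation; the benchmark record layout, the similarity measure and the distance threshold are assumptions made solely for the example.

# Illustrative sketch of benchmark-based ML model selection (assumed data layout).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Benchmark:
    model_name: str           # e.g. "svm", "random_forest"
    conditions: dict          # environmental factors and indexes recorded at capture time
    error_coefficient: float  # error achieved by the model under those conditions

def condition_distance(current: dict, past: dict) -> float:
    """Simple L1 distance over the factors both records share (assumed metric)."""
    shared = set(current) & set(past)
    if not shared:
        return float("inf")
    return sum(abs(current[k] - past[k]) for k in shared) / len(shared)

def select_model(current_conditions: dict, benchmarks: List[Benchmark],
                 max_distance: float = 0.2) -> Optional[str]:
    """Return the model that performed best under the most similar past conditions,
    or None, in which case all candidate models are triggered in parallel."""
    best_name, best_error = None, float("inf")
    for b in benchmarks:
        d = condition_distance(current_conditions, b.conditions)
        if d <= max_distance and b.error_coefficient < best_error:
            best_name, best_error = b.model_name, b.error_coefficient
    return best_name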
[0033] Referring now to the drawings, and more particularly to FIGS. 1 through 5B, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[0034] FIG. 1A is a block diagram of a system 100 for line contour and color gradient contour based detection of an object, in accordance with some embodiments of the present disclosure. The system 100 comprises the following components as depicted.
[0035] Capture Camera (CC): This is a low-resolution camera with basic night vision capability, which is less expensive, easily available, and replaceable, and which produces images of minimal size at standard resolution. With standard resolution, the image size is low, requiring lesser memory and computation resources. The CC captures input images of moving objects. This process is triggered by the Passive Infrared (PIR) sensor and/or is programmed to trigger at a given interval. The PIR, GPS (Global Positioning System), light, moisture sensing module, dust particles sensing module, and temperature sensing/measuring modules installed are ultra-low-power devices. The GPS module provides the exact GPS coordinates of the system 100, which may for example be installed at a traffic junction, or the like, for monitoring vehicle movement. Light, moisture, dust particles, and temperature sensing devices (or modules) deployed along with the CC capture the corresponding data at the GPS coordinate at a given time. Alternatively, weather data associated with the GPS coordinates is obtained via a plurality of available resources, such as third parties via edge computing, when triggered by the system 100. The weather data is collected on light, moisture, dust particles, and temperature at that GPS coordinate and around it in a radius of 1-2 km. This helps to predict the overall disturbance in terms of the drag index and eliminates similar computation to be done by similar systems in the same vicinity. When there is a trigger on the system 100, the instructions are sent with the GPS coordinates to an edge computing node, which checks whether there was a recent trigger from any of the other systems in the predefined radius (grid). Thus, if another trigger was received from a similar system installed within the radius, the edge node does not initiate a new request to a third party and the prior received information from the third party is used, as the environmental factors or weather data do not change. However, if the edge node receives a request from a similar system outside the radius (grid) defined, a new request for weather data is initiated. This grid check helps optimize (i) computation, (ii) storage of the data, and (iii) costs being incurred from the third party data providers.
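The grid check performed by the edge computing node can be illustrated as follows. This Python sketch is illustrative only; the haversine distance helper, the in-memory cache, the 2 km radius constant and the validity window are assumptions chosen for the example, not values fixed by the disclosure.

# Sketch of the edge-node grid check: reuse recent weather data / drag index for
# cameras inside the same grid, fetch fresh data only for cameras outside it.
import math
import time

GRID_RADIUS_KM = 2.0        # assumed predefined radius defining the grid
VALIDITY_SECONDS = 15 * 60  # assumed predefined time span for reuse

_recent = []  # cached entries: {"lat", "lon", "timestamp", plus weather / drag index fields}

def _distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS coordinates (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def get_environment(lat, lon, fetch_from_third_party):
    """Return cached data for a trigger inside the grid, otherwise fetch and cache."""
    now = time.time()
    for entry in _recent:
        if (now - entry["timestamp"] <= VALIDITY_SECONDS
                and _distance_km(lat, lon, entry["lat"], entry["lon"]) <= GRID_RADIUS_KM):
            return entry  # reuse: no new third-party request, no repeated computation
    fresh = {"lat": lat, "lon": lon, "timestamp": now, **fetch_from_third_party(lat, lon)}
    _recent.append(fresh)
    return fresh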
[0036] In an embodiment, the one or more current environmental factors obtained from the plurality of sensing devices deployed along with the CC and the plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates are reused for one or more CCs in the grid in case of failure of corresponding plurality of sensing devices of the one or more CCs.
[0037] Controller (102): Comprises, for example, a microcontroller for computing logic and functions as an IoT (Internet of Things) device and is capable of connecting to the edge computing available via a network connection module. This module transmits/receives data and can leverage edge computing by transferring data to any central systems for calculation and storage.
[0038] Battery compartment: This module includes a battery that provides power to the system 100. Since all components of the system 100 are extremely low powered devices, the power consumption is low. An option to plug in an alternative source of power supply, if available, is provided, adding convenience to a user during the installation process. For example, if mounted on a traffic signal, the power supply can be plugged in from the supply to the signal, or in case of an installation where there is no sunlight to recharge the batteries, the system can be plugged into a power source.
[0039] Solar Panel: This is used to recharge a power source of the system 100, wherein the power source may include a rechargeable battery, a non-rechargeable battery, or a direct power supply, making the system 100 self-sufficient and sustainable in nature.
[0040] Validation Camera (VC): This camera has the same technical specifications as the CC. It is an additional camera connected to the stepper motor via a gear, which is in turn connected to the controller 102 to get the instructions. The instructions sent by the controller 102 after the calculation of the drag index are translated to a reverse spin co-efficient which directs the movement of the camera. The spin co-efficient signifies the direction of the rotation of the camera, the degree of the rotation and the speed of the rotation. For example, a drag index = -0.02 is translated to a spin co-efficient = left by 2.2 degrees with a speed of rotation = 10 degrees per second, as sketched below. Use case: A car is travelling from left to right (relative to the placement of the camera) on a road and the picture is taken at night. A drag is observed due to low light, environmental conditions and the speed of the car. If the camera is moved in the same direction as the motion of the car, the drag in the image is reduced/eliminated and the original image without the drag can be re-constructed. This is then used for validation against the original picture taken by the CC and the contour and thermalized bands extracted.
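The translation of the drag index into a reverse spin co-efficient may, for illustration only, be sketched as follows. The linear scaling factor and the constant rotation speed are assumptions chosen solely so that the worked example above (drag index = -0.02 giving a left rotation of 2.2 degrees at 10 degrees per second) is reproduced.

# Sketch: translating a drag index into a reverse-spin co-efficient for the VC.
# The numeric factors are assumptions matching the example in the text only.
from dataclasses import dataclass

@dataclass
class SpinCoefficient:
    direction: str            # "left" or "right" relative to the camera placement
    degrees: float            # magnitude of rotation
    speed_deg_per_sec: float  # rotation speed for the stepper motor

DEGREES_PER_UNIT_DRAG = 110.0   # assumed scaling: 0.02 * 110 = 2.2 degrees
DEFAULT_SPEED = 10.0            # assumed constant rotation speed (degrees per second)

def reverse_spin(drag_index: float) -> SpinCoefficient:
    """Spin opposite to the observed drag so the VC counteracts the motion blur."""
    direction = "left" if drag_index < 0 else "right"
    return SpinCoefficient(direction=direction,
                           degrees=abs(drag_index) * DEGREES_PER_UNIT_DRAG,
                           speed_deg_per_sec=DEFAULT_SPEED)

# reverse_spin(-0.02) -> SpinCoefficient(direction='left', degrees=2.2, speed_deg_per_sec=10.0)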
[0041] Stepper Motor: Receives the spin co-efficient from the processor/controller 102 and moves the VC in that direction, by the instructed degree of rotation and at the instructed speed of rotation.
[0042] FIG. 1B is a functional block diagram of a controller 102 of the system 100 that controls the line contour and color gradient contour based detection of the object, in accordance with some embodiments of the present disclosure.
[0043] In an embodiment, the controller 102 includes a processor(s) 104, communication interface device(s), alternatively referred as input/output (I/O) interface(s) 106, and one or more data storage devices or a memory 110 operatively coupled to the processor(s) 104. The controller 102 with one or more hardware processors is configured to execute functions of one or more functional blocks of the controller 102.
[0044] Referring to the components of controller 102, in an embodiment, the processor(s) 104, can be one or more hardware processors 104. In an embodiment, the one or more hardware processors 104 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 110. In an embodiment, the controller 102 can be implemented in a variety of computing systems including laptop computers, notebooks, hand-held devices such as mobile phones, and the like.
[0045] The I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface and the like. Further, the I/O interface(s) 106 can facilitate multiple communications within a wide variety of
networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular and the like. In an embodiment, the I/O interface (s) 106 can include one or more ports for connecting to a number of external devices or to another server or devices such as components of the system 100 depicted in FIG. 1A and edge computing nodes.
[0046] The memory 110 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
[0047] Further, the memory 110 includes a database 108 for storage of data such as captured input images, intermediate processed images, and the like. Further, the memory 110 includes modules such as the machine learning (ML) model (not shown) that is trained for object detection using the line based contours and the color gradient contours using the iterative process disclosed herein by the method. Further, the memory 110 may comprise information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure. In an embodiment, the database 108 may be external (not shown) to the system 100 and coupled to the system 100 via the I/O interface 106. Functions of the components of the system 100 are explained in conjunction with FIGS. 1C through 5B.
[0048] FIG. 1C illustrates an architectural and process overview of the system 100 of FIG. 1A, in accordance with some embodiments of the present disclosure. The architecture can be understood in conjunction with steps of method of FIG. 2.
[0049] FIGS. 2A through 2C (collectively referred to as FIG. 2) is a flow diagram illustrating a method 200 for line contour and color gradient contour based detection of the object, using the system 100 of FIG. 1A, in accordance with some embodiments of the present disclosure.
[0050] In an embodiment, the system 100 comprises one or more data storage devices or the memory 110 operatively coupled to the processor(s) 104 and
is configured to store instructions for execution of steps of the method 200 by the processor(s) or one or more hardware processors 104. The steps of the method 200 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in FIG. 1A through 1C and the steps of flow diagram as depicted in FIG. 2 with illustrative examples in FIG. 3 through 5B. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods, and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
[0051] Referring to the steps of the method 200, at step 202 of the method 200, the one or more hardware processors 104 receive the input image of an object in motion. As depicted in FIG. 1C, the input image is captured via the CC deployed in the system 100 set up at a location. The CC captures images at one of i) predefined time intervals, and ii) on receiving a trigger from the PIR sensors. At step 204, the one or more hardware processors 104 generate a line contour image and a thermalized color gradient contour image from the input image. The line contour creation and thermalization can be performed using one or more techniques known in the art. The thermalized image is created based on the reflection light captured from the surface with the disturbance. At step 206, the one or more hardware processors 104 dynamically compute the disturbance index, introduced due to one or more current environmental factors at the location and motion of the object during capture of the input image. Tools known in the art are used to compute the disturbance index from the external factors. As mentioned, the environmental factors are obtained from a plurality of sources (such as third party providers) with reference to GPS coordinates of the CC. At step 208, the one or more hardware processors 104 determine a pixel drag for the input image. At step 210, the one or more hardware processors 104 determine an actual Field of Object (FoO) from the input image and at step 212 determine an expected FoO based on the actual FoO, the disturbance index and the pixel drag. Tools known in the art are used to compute the pixel drag in the image. At step 214, the one or more hardware processors 104 dynamically compute the drag index in terms of the summed differential blur of the actual FoO and the expected FoO. The summed differential blur is computed by representing the input image in a 3-dimensional pixel space by using a multi-dimensional matrix. The summed differential blur concept is explained with an example. When the input image is captured, it is captured in 2D and is planar. The multi-dimensional matrix is used, which helps to capture image information across 3 dimensions, which enables figuring out the shape of the object better and faster. For example, in the case of the car of FIG. 3, the 3D space representation of the image enables extraction of the height, width, curvature near the bends or edges, etc., of the car. This helps in the line contouring and color gradient contouring process. By plotting the coordinates, angle of picture, distance, etc., the 3D specifications are translated to 2D. This information is available to the machine learning algorithms, applied as a bias for accurate prediction. In an application of the system 100, if there is damage in a car which is captured by the camera during an accident, these 3D space representation details will help to assess the extent of the damage and the severity of the accident. Analyzing the damage, the system 100 can initiate an emergency process and can call emergency services, like an ambulance or a fire brigade in case of fire, etc., and inform the police. The details can be sent to the insurance company directly based on the car number plates recognized.
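For illustration only, the summed differential blur and the drag index over the actual FoO and the expected FoO may be sketched as below, treating the image regions as 3-dimensional pixel arrays (height, width, colour). The normalisation and the non-negative form of the index are assumptions of this sketch; in the disclosure the drag index may also carry a sign indicating the drag direction.

# Sketch: drag index as summed differential blur of the actual and expected FoO.
import numpy as np

def summed_differential_blur(actual_foo: np.ndarray, expected_foo: np.ndarray) -> float:
    """Sum of per-pixel differences between the two FoO regions (same shape assumed)."""
    diff = np.abs(actual_foo.astype(np.float64) - expected_foo.astype(np.float64))
    return float(diff.sum())

def drag_index(actual_foo: np.ndarray, expected_foo: np.ndarray) -> float:
    """Drag index expressed in terms of the summed differential blur, normalised by
    the region size so that it is comparable across images (assumed normalisation)."""
    blur = summed_differential_blur(actual_foo, expected_foo)
    return blur / actual_foo.size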
[0052] The computed disturbance index and the drag index are stored in the database 108, as depicted in FIG. 1C.
[0053] FIG. 3 is an example illustrating the actual Field of Object (FoO) and an expected FoO, computed by the controller of FIG. 2, for determining the summed differential blur in the object captured in the input image, in accordance with some embodiments of the present disclosure. Since the input image captured may be affected by disturbance caused by external factors, this reflects in pixel distortion and drag. The illustration in FIG. 3 shows the actual field of vision of the CC, where the complete image frame is captured. From within that, the actual field of the object (FoO) is extracted. With the computed disturbance index and the pixel drag, the expected field of the actual object (expected FoO) after processing is determined. The summed differential blur is then fed back as a feedback loop for the determination of the image in the contour lines, since the lines are skewed due to this factor, and is also fed back to the machine learning algorithm(s). The pixel drag and the disturbance index also help determine the approximate speed of the moving object using the plotted coordinates, angle of picture, distance, etc. as mentioned above.
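As an illustrative sketch of how the pixel drag and the capture geometry can yield an approximate speed of the moving object, the following Python example may be considered; the pinhole-style conversion, the exposure time parameter and the angle correction are assumptions, not features fixed by the disclosure.

# Sketch: approximate object speed from the observed pixel drag and camera geometry.
import math

def approximate_speed_kmph(pixel_drag: float, distance_m: float, angle_deg: float,
                           focal_length_px: float, exposure_s: float) -> float:
    """Convert the pixel drag observed during one exposure into an approximate
    ground speed, correcting for the viewing angle of the camera."""
    # ground displacement covered during the exposure (small-angle approximation)
    ground_shift_m = (pixel_drag / focal_length_px) * distance_m
    # correct for the angle between the object's motion and the image plane
    ground_shift_m /= max(math.cos(math.radians(angle_deg)), 1e-6)
    speed_mps = ground_shift_m / exposure_s
    return speed_mps * 3.6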
[0054] Once the drag index is computed, at step 216, the one or more hardware processors 104 obtain a reconstructed line contour image and a reconstructed thermalized color gradient contour image using the drag index. At step 218, the one or more hardware processors 104 detect the object using an iterative process, performed simultaneously over the reconstructed line contour image and the thermalized color gradient contour image, by iteratively creating and comparing one by one successive line contours and color gradient contours. The pixel locations corresponding to the line contours and the color gradient contours are stored and used for detecting the object. The iterative process steps are as explained below, and a simplified sketch of the loop is given after the list of steps:
a) Detect an object boundary in the reconstructed line contour image and the reconstructed thermalized color gradient contour image (218a). A captured object image is depicted in FIG. 4A, and the detected boundaries are depicted in FIG. 4B. Similarly, a thermalized image is depicted in FIG. 5A and the boundaries of the object in the thermalized image are depicted in FIG. 5B.
b) Locate a plurality of trajectory start points positioned equidistantly on the object boundary in the reconstructed line contour image and the reconstructed thermalized color gradient contour image (218b). The trajectory start points are depicted in FIGS. 4B and 5B.
c) Determine a plurality of trajectories of contour convergence, wherein each of the plurality of trajectories of contour convergence initiates from each of the plurality of trajectory start points and converges towards a central location on the reconstructed line contour image and the reconstructed thermalized color gradient contour image (218c). The convergence is also depicted in FIG. 4B and 5B. Further, depth of each of the plurality of trajectories of contour convergence in the reconstructed object image and the
reconstructed thermalized image is computed. The depth is used to calculate the number of contours needed and the iterations needed to reach the end of the convergence point.
d) Create a current line contour and a current color gradient contour for the reconstructed line contour image and the reconstructed thermalized color gradient contour image, with contour creation initiating from the boundary towards the central location in steps following the plurality of trajectories (218d).
e) Create a first successive line contour and a first successive color gradient contour for the reconstructed object image and the reconstructed thermalized image, wherein pixel locations corresponding to the current line contour and the current color gradient contour are stored for object detection (218e). The color gradient contours are marked based on change in color gradient of the thermalized image as seen in FIG. 5B.
f) Slice, in an iterative way, and stream one by one, (i) an area between the current line contour and the first successive line contour to generate a current line contour based sliced image and (ii) an area between the current color gradient contour and the first successive color gradient contour to generate a current color-gradient-based sliced image. The sliced area increases in each iteration as the contour creation approaches the central location.
g) Analyze each of the current line contour based sliced image and the current color gradient-based sliced image by the ML model trained to predict an object type in the input image into one among the plurality of object types (218g). The most appropriate ML model to be used is selected based on the picture captured, the environment, etc. There are two mechanisms to select the best ML model.
1) Algorithms for the ML models can be triggered in parallel when it is in doubt which will yield the best results given the various indexes computed, which are a reflection of various input parameters captured as a part of the environmental aspects and the quality of the picture captured.
2) Based on the stored benchmarks of results achieved, mapped to the current conditions captured and the indexes computed, the ML model that had yielded the best results in the past under comparable conditions is directly utilized. The machine learning algorithm is trained using exhaustive data sets starting from minimal contours to maximum contours plotted towards the center of the object. The varied method of training helps detect faster, learn better, and utilize lesser computation and memory, thus making it a more sustainable solution, and comply with various regulatory requirements like GDPR, etc. This approach keeps privacy and security central. Slicing the input image from a contour perspective and processing only the sliced parts enables a considerable reduction in the amount of memory and compute needed.
h) Determine a mismatch by superimposing pixels corresponding to the
current line contour-based sliced image and the current color-gradient-based sliced image with one or more pre-identified objects and gradients and curvatures with index corrections applied (218h). The object templates of interest are stored in the database 108. The identification is done by superimposition of pixels over similar object types and object references. For example, assume there is a car of type SUV. An SUV outline image is available beforehand as a template with the system 100 in its database, along with curvature information from different angles of view of that car. Thus, such a previously known outline can be dropped onto a blurred image of a similar car, and then an attempt can be made to identify the car and/or draw or correct the outline of the target image, or to find the gap between the actual outline and the captured outline of the car.
i) Compute an error coefficient of the ML model for the predicted
object type based on an error as a result of the deviation from the pre-identified objects (218i).
j) Detect the object in the input image to be of the predicted object type
by the ML model if the error coefficient reaches a minimal value (218j). The minimal value refers to that point of the error coefficient after which, when further iterations are performed, the error coefficient starts increasing. Any techniques known in the art, such as stochastic approaches or gradient descent approaches, can be used to track the error coefficient and determine the point of the minimal error coefficient. Based on the nature of the object traversal trajectory, a choice between approaches can be made.
k) Repeat the iterative process by generating a second successive line
contour and a second successive color gradient contour, if the error coefficient has not reached the minimal value (218k).
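A simplified sketch of the iterative process of steps (a) through (k) is given below. It is illustrative only; the helper callables for contour creation, slicing, template superimposition and error computation are assumed interfaces supplied by the caller, standing in for the operations described above, and are not APIs defined by the disclosure.

# Compact sketch of the iterative detection loop; helpers are assumed callables.
def detect_object(recon_line_img, recon_thermal_img, ml_model, templates,
                  create_contour, slice_between, superimpose_mismatch,
                  error_coefficient, max_iterations=50):
    """Grow contours from the boundary towards the central location, slice the
    area between successive contours, and stop once the error coefficient bottoms out."""
    line_contour = create_contour(recon_line_img, 0)          # step (d): current contours
    color_contour = create_contour(recon_thermal_img, 0)
    best_prediction, best_error = None, float("inf")
    for step in range(1, max_iterations + 1):
        next_line = create_contour(recon_line_img, step)      # step (e): successive contours
        next_color = create_contour(recon_thermal_img, step)
        line_slice = slice_between(recon_line_img, line_contour, next_line)       # step (f)
        color_slice = slice_between(recon_thermal_img, color_contour, next_color)
        predicted = ml_model.predict(line_slice, color_slice)                     # step (g)
        mismatch = superimpose_mismatch(line_slice, color_slice,
                                        templates[predicted])                     # step (h)
        error = error_coefficient(mismatch)                                       # step (i)
        if error > best_error:
            return best_prediction   # error started increasing: minimal value reached (j)
        best_prediction, best_error = predicted, error
        line_contour, color_contour = next_line, next_color                       # step (k)
    return best_prediction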
[0055] The method 200 focuses on privacy and security of data. Thus, once the detection is done, the iterations stop immediately and only the trajectory information, contour tracing equations and steps are stored in the permanent storage, while the image, etc. is purged from the temporary memory. This also helps optimize storage volume and costs. The machine learning utilizes this stored information of trajectory, contour tracing equations and steps for further training of the ML algorithm, apart from the usage of actual images. This information also helps in superimposition of different contours for mapping and matching.
[0056] As depicted in FIG. 1C, the method further performs validation of the predicted object type using a validation image captured by the validation Camera (VC). The validation steps include (a simplified sketch follows the list):
a) calculating a reverse spin for the VC in accordance with the computed dynamic disturbance index and the computed dynamic drag index;
b) rotating the VC using the stepper motor, as depicted in FIG. 1C, in accordance with the reverse spin to set a VC capture position;
c) capturing the validation image from the VC capture position; and
d) overlaying the detected object in terms of line contours and color gradient contours generated from the input image captured by the CC with the validation image to validate in accordance with the error coefficient.
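For illustration, the validation flow may be sketched as below. The camera, stepper motor and overlay-error interfaces are assumed placeholders, the error threshold is an assumption, and the reverse_spin helper reused here is the illustrative sketch given earlier for the VC.

# Sketch of the validation flow with assumed hardware and overlay interfaces.
def validate_detection(vc, stepper, detected_contours, drag_index,
                       overlay_error, error_threshold=0.05):
    """Rotate the VC by the reverse spin, capture a validation image and check that
    overlaying the detected contours stays within the error coefficient tolerance."""
    spin = reverse_spin(drag_index)                 # reuses the earlier reverse-spin sketch;
                                                    # the disturbance index could also feed in here
    stepper.rotate(direction=spin.direction, degrees=spin.degrees,
                   speed=spin.speed_deg_per_sec)    # set the VC capture position
    validation_image = vc.capture()                 # capture from the VC capture position
    error = overlay_error(detected_contours, validation_image)
    return error <= error_threshold                 # validated if within the tolerance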
[0057] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[0058] Images depicted in FIGS. 3 through 5B are from public database and/or created by the system 100 while executing the steps of method 200 and used for illustration and explanation purposes only.
[0059] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means, and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[0060] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a
computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0061] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0062] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory,
nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[0063] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
We Claim:
1. A processor implemented method (200) for contour-based object detection, the method comprising:
receiving, by one or more hardware processors of a controller, an input image of an object, captured via a Capture Camera (CC), deployed at a location, wherein the CC captures a plurality of images at one of i) predefined time intervals, and ii) on receiving a trigger from Passive Infrared (PIR) sensors (202);
generating, by the one or more hardware processors, a line contour image and a thermalized color gradient contour image from the input image (204);
dynamically computing, by the one or more hardware processors, a disturbance index, introduced due to one or more current environmental factors at the location and motion of the object during capturing of the input image (206), wherein the one or more current environmental factors are obtained from a plurality of sensing devices deployed along with the CC and a plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates of the CC;
determining, by the one or more hardware processors, a pixel drag for the input image (208);
determining, by the one or more hardware processors, an actual Field of Object (FoO) from the input image (210);
determining, by the one or more hardware processors, an expected FoO based on the actual FoO, the disturbance index and the pixel drag (212);
dynamically computing, by the one or more hardware processors, a drag index in terms of summed differential blur of the actual FoO and the expected FoO (214), wherein the summed differential blur is computed by representing the input image in a 3-dimensional pixel space by using a multi-dimensional matrix;
obtaining a reconstructed line contour image and a reconstructed thermalized color gradient contour image using the drag index (216); and
detecting, by the one or more hardware processors, the object using an iterative process, performed simultaneously over the reconstructed line contour image and the thermalized color gradient contour image (218) by iteratively creating and comparing one by one successive line contours and color gradient contours, wherein pixel locations corresponding to the line-contours and the color-gradient contours are stored and used for detecting the object, and wherein the iterative process comprising:
a) detecting an object boundary in the reconstructed line contour image and the reconstructed thermalized color gradient contour image (218a);
b) locating a plurality of trajectory start points positioned equidistantly on the object boundary of the reconstructed line contour image and the reconstructed thermalized color gradient contour image (218b);
c) determining a plurality of trajectories of contour convergence, wherein each of the plurality of trajectories of contour convergence initiates from each of the plurality of trajectory start points and converges towards a central location of highest crest or trough on the reconstructed line contour image and the reconstructed thermalized color gradient contour image (218c);
d) creating a current line contour and a current color-gradient contour for the reconstructed line contour image and the reconstructed thermalized color gradient contour image, with contour creation initiating from the boundary towards the central location in steps following the plurality of trajectories (218d);
e) creating a first successive line contour and a first successive color gradient contour for the reconstructed object image and the reconstructed thermalized image, wherein pixel locations corresponding to the current line contour and the current color gradient contour are stored for object detection (218e);
f) slicing, in an iterative way, and streaming one by one, (i) an area between the current line contour and the first successive line contour to generate a current line contour based sliced image and (ii) an area between the current color gradient contour and the first successive color gradient contour to generate a current color gradient-based sliced image, wherein the sliced area increases in each iteration as the contour creation approaches the central location (218f);
g) analyzing each of the current line contour-based sliced image and the current color gradient based sliced image by a Machine Learning (ML) model trained to predict an object type in the input image into one among a plurality of object types (218g);
h) determining a mismatch by superimposing pixels corresponding to the current line contour based sliced image and the current color gradient based sliced image with one or more pre-identified objects having gradients and curvatures with index corrections applied (218h);
i) computing an error coefficient of the ML model for the predicted object type based on an error, as a result of the deviation from among the pre-identified objects (218i);
j) detecting the object in the input image to be of the predicted object type by the ML model if the error coefficient reaches a minimal value (218j); and
k) repeating the iterative process by generating a second successive line contour and a second successive color gradient contour, if the error coefficient has not reached the minimal value (218k).
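To make the flow of steps (218a)-(218k) of claim 1 above easier to follow, a minimal sketch in Python is given below. It is illustrative only and is not the claimed implementation: the circular contour model, the stub classifier classify_slice, the function detect_object, and the error-coefficient formula are assumptions introduced for the example; the claim does not restrict how the contours or the ML model are realized.

```python
# Illustrative sketch only (not the claimed implementation): the contour
# generation, the ML classifier and the error-coefficient formula are
# hypothetical placeholders showing the shape of the iterative process
# of claim 1, steps a)-k).
import numpy as np

def classify_slice(sliced_pixels):
    """Stub for the trained ML model of step g); returns (object_type, error_coefficient)."""
    # A real model would score the partial (sliced) image against known object types.
    mean_intensity = float(sliced_pixels.mean()) if sliced_pixels.size else 1.0
    return "vehicle", abs(mean_intensity - 0.5)     # hypothetical error coefficient

def detect_object(reconstructed, n_steps=10, min_error=0.05):
    """Iteratively slice areas between successive contours, from the object
    boundary towards the central location, until the error coefficient is small."""
    h, w = reconstructed.shape
    cy, cx = h / 2.0, w / 2.0                       # assumed central location (highest crest/trough)
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)             # distance of every pixel from the centre
    r_max = radius.max()

    prev_r = r_max                                  # current contour starts at the boundary (step d)
    for step in range(1, n_steps + 1):
        new_r = r_max * (1.0 - step / n_steps)      # successive contour, closer to the centre (step e)
        ring = (radius <= prev_r) & (radius > new_r)  # area between the two contours (step f)
        obj_type, err = classify_slice(reconstructed[ring])  # steps g)-i)
        if err <= min_error:                        # step j): prediction accepted
            return obj_type, err
        prev_r = new_r                              # step k): repeat with the next contour
    return obj_type, err                            # best effort after all iterations

if __name__ == "__main__":
    demo = np.random.rand(64, 64)                   # stand-in for a reconstructed contour image
    print(detect_object(demo))
```

In the claimed method the same loop runs simultaneously over the reconstructed line contour image and the reconstructed thermalized color gradient contour image; the sketch shows it for a single image only.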
2. The method as claimed in claim 1, further comprising validating the predicted object type using a validation image captured by a validation Camera (VC), the validating comprising:
a) calculating a reverse spin for the VC in accordance with the computed dynamic disturbance index and the computed dynamic drag index;
b) rotating the VC using a stepper motor in accordance with the reverse spin to set a VC capture position;
c) capturing the validation image from the VC capture position; and
d) overlaying the detected object generated from the input image captured by the CC with the validation image to validate in accordance with the error coefficient.
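The validation flow of claim 2 can be sketched as below. The reverse-spin relation, the callable names rotate_stepper, capture_vc and overlay_error, and the acceptance threshold are assumptions made for illustration; the claim itself does not fix these details.

```python
# Hypothetical sketch of the validation flow of claim 2; the reverse-spin
# formula, the stepper-motor driver and the overlay score are assumptions,
# not taken from the specification.
def validate_detection(disturbance_index, drag_index, rotate_stepper, capture_vc,
                       detected_object_mask, overlay_error, max_error=0.05):
    reverse_spin_deg = -(disturbance_index + drag_index)        # step a): assumed reverse-spin relation
    rotate_stepper(reverse_spin_deg)                             # step b): move VC to its capture position
    validation_image = capture_vc()                              # step c): capture the validation image
    err = overlay_error(detected_object_mask, validation_image)  # step d): overlay detected object and score
    return err <= max_error
```

In practice the reverse spin would be derived from the same dynamically computed disturbance index and drag index recited for the CC, as stated in step a) of the claim.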
3. The method as claimed in claim 1, wherein the method further comprises updating a central system with the computed drag index for the CC, wherein the drag index is reused for one or more CCs in a predefined radius of the CC defining a grid, if one or more controllers corresponding to the one or more CCs trigger a request for drag index computation within a predefined time span of the drag index computation, eliminating repeated drag index computation.
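Claim 3 above describes reuse of a computed drag index across CCs of the same grid within a predefined time span, avoiding repeated computation. A minimal sketch of such a registry, assuming a grid identifier and a fixed reuse window, could look like the following; the class name DragIndexRegistry and the window value are hypothetical.

```python
# Illustrative central registry for claim 3; grid membership and the
# reuse window are assumptions made for the example.
import time

class DragIndexRegistry:
    """Central store letting CCs in the same grid reuse a recently computed drag index."""
    def __init__(self, reuse_window_s=300):
        self.reuse_window_s = reuse_window_s
        self._entries = {}                       # grid_id -> (drag_index, timestamp)

    def update(self, grid_id, drag_index):
        """Record the drag index computed by a controller for its grid."""
        self._entries[grid_id] = (drag_index, time.time())

    def get(self, grid_id):
        """Return a reusable drag index, or None if it must be recomputed."""
        entry = self._entries.get(grid_id)
        if entry and (time.time() - entry[1]) <= self.reuse_window_s:
            return entry[0]
        return None
```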
4. The method as claimed in claim 1, wherein the one or more current environmental factors obtained from the plurality of sensing devices deployed along with the CC and the plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates are reused for one or more CCs in the grid in case of failure of corresponding plurality of sensing devices of the one or more CCs.
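Claim 4 falls back to grid-level environmental data when a CC's own sensing devices fail. A simplified, hedged sketch of that fallback, assuming the grid registry of the previous example and a hypothetical sensor-read callable, is shown below.

```python
# Hypothetical fallback for claim 4: if a CC's own sensors fail, reuse the
# environmental factors last reported for another CC in the same grid.
def current_environment(cc_sensors_read, grid_environment_cache, grid_id):
    try:
        return cc_sensors_read()                     # local sensing devices + weather sources
    except IOError:
        return grid_environment_cache.get(grid_id)   # reuse grid-level data on sensor failure
```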
5. A system (100) for contour-based object detection, the system (100) comprising:
a Capture Camera (CC), a Controller (102), a plurality of sensing devices, Passive Infrared (PIR) sensors, a validation Camera (VC), a stepper motor, and a power source, wherein the controller (102) comprises:
a memory (102) storing instructions;
one or more Input/Output (I/O) interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more I/O interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive an input image of an object in motion, captured via the CC, deployed at a location, wherein the CC captures a plurality of images at one of i) predefined time intervals, and ii) on receiving a trigger from the PIR sensors;
generate a line contour image and a thermalized color gradient contour image from the input image;
dynamically compute a disturbance index, introduced due to one or more current environmental factors at the location and motion of the object during capturing of the input image, wherein the one or more current environmental factors are obtained from the plurality of sensing devices deployed along with the CC and a plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates of the CC;
determine a pixel drag for the input image;
determine an actual Field of Object (FoO) from the input image; determine an expected FoO based on the actual FoO, the disturbance index and the pixel drag;
dynamically compute a drag index in terms of summed differential blur of the actual FoO and the expected FoO, wherein the summed differential blur is computed by representing the input image in a 3-dimensional pixel space by using a multi-dimensional matrix;
obtain a reconstructed line contour image and a reconstructed thermalized color gradient contour image using the drag index; and
detect the object using an iterative process, performed simultaneously over the reconstructed line contour image and the thermalized color gradient contour image by iteratively creating and comparing one by one successive line contours and color gradient contours, wherein pixel locations corresponding to the line-contours and the color-gradient contours are stored and used for detecting the object, and wherein the iterative process comprising:
a) detecting an object boundary in the reconstructed line contour image and the reconstructed thermalized color gradient contour image;
b) locating a plurality of trajectory start points positioned equidistantly on the object boundary of the reconstructed line contour image and the reconstructed thermalized color gradient contour image;
c) determining a plurality of trajectories of contour convergence, wherein each of the plurality of trajectories of contour convergence initiates from each of the plurality of trajectory start points and converges towards a central location of highest crest or trough on the reconstructed line contour image and the reconstructed thermalized color gradient contour image;
d) creating a current line contour and a current color-gradient contour for the reconstructed line contour image and the reconstructed thermalized color gradient contour image, with contour creation initiating from the boundary towards the central location in steps following the plurality of trajectories;
e) creating a first successive line contour and a first successive color gradient contour for the reconstructed object image and the reconstructed thermalized image, wherein pixel locations corresponding to the current line contour and the current color gradient contour are stored for object detection;
f) slicing, in an iterative way, and streaming one by one, (i) an area between the current line contour and the first successive line contour to generate a current line contour based sliced image and (ii) an area between the current color gradient contour and the first successive color gradient contour to generate a current color gradient-based sliced image, wherein the sliced area increases in each iteration as the contour creation approaches the central location;
g) analyzing each of the current line contour-based sliced image and the current color gradient based sliced image by a Machine Learning (ML) model trained to predict an object type in the input image into one among a plurality of object types;
h) determining a mismatch by superimposing pixels corresponding to the current line contour based sliced image and the current color gradient based sliced image with one or more pre-identified objects having gradients and curvatures with index corrections applied;
i) computing an error coefficient of the ML model for the predicted object type based on an error, as a result of the deviation from among the pre-identified objects;
j) detecting the object in the input image to be of the predicted object type by the ML model if the error coefficient reaches a minimal value; and
k) repeating the iterative process by generating a second successive line contour and a second successive color gradient contour, if the error coefficient has not reached the minimal value.
6. The system as claimed in claim 5, further comprising validating the predicted object type using a validation image captured by the validation Camera (VC), the validating comprising:
a) calculating a reverse spin for the VC in accordance with the computed dynamic disturbance index and the computed dynamic drag index;
b) rotating the VC using a stepper motor in accordance with the reverse spin to set a VC capture position;
c) capturing the validation image from the VC capture position; and
d) overlaying the detected object generated from the input image captured by the CC with the validation image to validate in accordance with the error coefficient.
7. The system as claimed in claim 5, wherein the one or more hardware processors are further configured to update a central system with the computed drag index for the CC, wherein the drag index is reused for one or more CCs in a predefined radius of the CC defining a grid, if one or more controllers corresponding to the one or more CCs trigger a request for drag index computation within a predefined time span of the drag index computation, eliminating repeated drag index computation.
8. The system as claimed in claim 5, wherein the one or more current environmental factors obtained from the plurality of sensing devices deployed along with the CC and the plurality of sources providing current weather data with reference to global positioning system (GPS) coordinates are reused for one or more CCs in the grid in case of failure of corresponding plurality of sensing devices of the one or more CCs.