Abstract: This invention provides a novel methodology to track surgical instruments in surgical video sequences and retrieve the frames where a particular instrument is being used. In the first phase the image of the instrument to be tracked is taken and its features are extracted. In the second phase the video is split into segments to find the region of interest (ROI) and reduce the region of search. Finally, the color and shape based features of the video and the input query image are matched to track and retrieve the required instrument. The system recognizes the instrument in 91% of cases and does not give any false alarm. This invention envisages a novel way of query based retrieval of video sequences from a video.
FORM-2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
PROVISIONAL SPECIFICATION
(See section 10 and Rule 13)
RETRIEVAL AND INDEXING OF VIDEO SEQUENCES OF A VIDEO
TATA CONSULTANCY SERVICES LIMITED
an Indian Company
of Bombay House, 24, Sir Homi Modi Street, Mumbai-400 001,
Maharashtra, India
THE FOLLOWING SPECIFICATION DESCRIBES THE INVENTION
Field of the Invention:
This invention relates to retrieval and indexing of video sequences of a video.
Background of the Invention:
The emergence of multimedia content within electronic publications raises a very critical issue: how to retrieve the object of interest from a huge collection of image and video data. In the medical field, images, and especially digital images, are produced in ever increasing quantities and used for diagnostics and therapy. Potential online applications of endoscopic video understanding are indexing, retrieval and analysis of surgery. Despite widespread interest in content-based visual information retrieval, tools for retrieval of video specialized to the medical domain do not yet exist. In particular, there are no tools available to automatically annotate laparoscopic surgery videos. Content-based access to these videos would benefit large domains of teaching and research, facilitating training and the surgical instruments manufacturing industry. Existing solutions for surgical instrument tracking involve modification of the instruments with visual markers. Our approach is focused on detecting particular surgical instruments in a large volume of video data, without any modification of the desired instrument, so that the use of the instrument can be studied and necessary upgrades can be made.
One can provide a digital image as a query and ask for all electronic documents that contain similar pictures; e.g., the input image can be a "medical scalpel" which is required to be searched for in a Bariatric Surgery sequence. In this case, the challenge is "understanding" the contents of the image. Most of the existing solutions available in the market for image retrieval are based on annotating each image with a set of keywords. Also, the retrieval in these cases is done mainly from an image database and not from videos. A major disadvantage of these applications is the manual classification, which is time-consuming and potentially error-prone. It is therefore only a temporary workaround, and hence the need for Content Based Video Retrieval.
Despite widespread interest in content-based visual information retrieval, no tools exist to date for retrieval of video specialized to the medical domain. Several approaches have been made to track laparoscopic instruments during surgery. The track data could be used for modeling and recognizing surgical instruments used in laparoscopic surgeries. Some of the approaches used colour based methods and shape models to track the instruments. Some research groups used instruments marked with a special colour for tracking. Zhang and Payandeh proposed a marker design on the instrument tips to recover robot control parameters. In most of these applications image recognition and analysis has been done based on colour thresholding and shape modeling. These methods may not be successful enough due to reflections from the instruments and varying illumination. Krupa et al. solved a related problem by using a light emitting diode on the instrument tip to project laser dots on the surface, whose pattern is analyzed to guide the endoscope. Instead of marking the instruments, Climent and Mares proposed a method using the Hough transform based on the assumption that most of the instruments are structured objects. However, most of these attempts have been made for automatic guidance of a robot so that it can perform as an assistant for laparoscopic surgeries.
The huge growth of image processing and pattern recognition applications, together with the rapidly expanding image and video collections on the World Wide Web, has attracted significant research efforts in effective retrieval and management of visual information. There is an old saying, "a picture is worth a thousand words". If each video sequence is considered a set of still images, and the number of images in such a set might be in the hundreds or thousands or even more, it is very difficult to find certain information in video documents. The sheer volume of video data available nowadays presents a real challenge to researchers: how can we organize the data and make use of all these video documents both effectively and efficiently? Research is very much needed to locate meaningful information and extract knowledge from video documents. It goes without saying that a better representation scheme for video contents means faster and more accurate retrieval of data. One good representation is to index the video data, i.e., to find the informative images from the indexed data. Apparently, it is totally impractical to index video documents manually because it is too time consuming. However, using various image processing techniques, computer science researchers have come up with the idea of shot detection, which is used to group frames into shots. Thus, a shot designates a contiguous sequence of video frames recorded by an uninterrupted camera operation.
In the last few years huge progress has been made in the field of multimedia CODECs, enabling the user to store more data in a smaller amount of space. By exploiting this advancement, medical practitioners are also recording the video sequences of some critical operations. These sequences are highly interesting for medical instrument manufacturers. They are usually keen to see how the doctors really use an instrument in real life operation theatres. But it is really difficult for them to find the portion of video where the actual instrument is used. Browsing a digital video library can be very tedious, especially with an ever expanding collection of multimedia material. Organizing the video and locating required information effectively and efficiently presents a great challenge to researchers.
Summary of the Invention:
This invention provides a novel methodology to track these surgical instruments in the surgical video sequences and retrieve the frames where the particular instrument is being used.
In the first phase the image of the instrument to be tracked is taken and its features are extracted. In the second phase the video is split into segments to find the region of interest (ROI) and reduce the region of search. Finally, the color and shape based features of the video and the input query image are matched to track and retrieve the required instrument. The system recognizes the instrument in 91% of cases and does not give any false alarm.
This invention envisages a novel way of query based retrieval of video sequences from a video.
This invention further envisages a novel way of tracking and searching an object using Fuzzy C Means or probabilistic tracking techniques such as the Bayesian classifier or Condensation algorithm on video sequences of a video.
Still further, this invention envisages a method by which both automatic and semi-automatic approaches for query image based video retrieval are made available.
This invention also envisages segmentation of a video sequence into shots, identifying one or more key frames for each shot.
Again, this invention envisages automatic retrieval of video images based on a query image, with or without manual intervention by the user viewing the key frames for each shot.
Still particularly, this invention envisages automatic retrieval of video sequences, typically surgical video sequences based on the query image of a surgical instrument. It also envisages generation of key frames for each shot of the surgical video.
In accordance with this invention, there is provided a novel method and apparatus to retrieve and index a video sequence based on a query image.
The first step of the method in accordance with this invention involves organizing the video into a hierarchical structure. Intuitively, a shot captures the notion of a single semantic entity. A shot break is a transition from one shot to the subsequent one, and may be of many types (for example, fade, dissolve, wipe and hard (or immediate)). Our interest lies in improving shot break detection by reducing the number of places erroneously declared as shot breaks (false positives) using a block color histogram.
This invention also provides a tool which breaks down the video into smaller and manageable units called shots. Also proposed in accordance with this invention is a shot detection system to index video data intelligently and automatically. Each frame of the video is divided into blocks and various features are extracted from each block. We compare every two consecutive images of the video at block level and compute the distance between the two motion-compensated blocks. If there is a significant difference between two frames we declare that a transition has occurred. Finally, we generate some thumbnails (representative frames) within each shot to make the search faster.
The invention therefore envisages a method of identifying the frames which contain information similar to that included in the input query image.
Typically, the method involves using both automatic and semi-automatic methods for identifying any image from the image database, or any video frame from a sequence, if the input query comes in the form of an image.
The method envisages splitting the video sequence into smaller parts that are taken by a single camera operation and contain similar information. These parts will be referred to as shots in the rest of this document.
The method therefore uses a multifactorial approach to determine the shot boundary, making the decision to identify the shot boundary from multiple parameters.
The above-mentioned parameter list contains:
• Formation of a one dimensional quantized block feature in the HSV domain
• Bhattacharyya distance $Dist(P,Q)=\sqrt{1-\sum_i \sqrt{P_i Q_i}}$ or any other probabilistic distance
• Laplacian of Gaussian, or a Gaussian followed by edge detection, in the intensity domain
• Normalization of the edge strength
• Computation of the homogeneity factor in terms of edges as described below
• Normalization of the above histogram
• Calculation of the runtime mean and variance of the histogram difference
• Declaration of the shot boundary based on the number of differing blocks, or on the runtime mean and standard deviation using $d_i \geq m + p\sigma$, where $d_i$ is the histogram distance
The method further involves:
• Calculating the Shannon entropy of every frame within a shot
• Determining the peaks of the set of entropy values within the shot
to identify some representative frames containing the most information within a shot, as sketched below. These representative frames are known as key frames. Figure 7 shows a typical histogram difference plot for a video sequence having a dissolve with well distinguished frames among shots, and Figure 8 shows a histogram difference plot for a video sequence having a very smooth dissolve with more or less the same kind of global information among shots.
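A minimal sketch of this key-frame selection step, assuming the shot is available as a list of grayscale NumPy frames (the 256-bin histogram granularity and the number of key frames per shot are illustrative choices, not specified above):

```python
import numpy as np

def shannon_entropy(frame):
    """Shannon entropy of a grayscale frame from its 256-bin histogram."""
    hist, _ = np.histogram(frame, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                                 # empty bins contribute 0
    return -np.sum(p * np.log2(p))

def key_frames(shot_frames, num_keys=1):
    """Pick the frames with peak entropy values within a shot."""
    entropies = np.array([shannon_entropy(f) for f in shot_frames])
    order = np.argsort(entropies)[::-1]          # highest entropy first
    return sorted(order[:num_keys].tolist())     # indices of the key frames
```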
The invention therefore envisages a method of automatically identifying the frames which contain information similar to that included in the input query image, by matching the template and/or the features of the input image and the input video sequence.
Typically, the method involves matching the following features:
• Color Histogram of query image
• Linear Shape (from observation)
• Edge Gradient of input video
• Compactness of query image.
• Compactness of candidate object in video
The method envisages splitting any video frame into smaller parts having similar features. Features are also extracted from the input query image.
The method therefore uses a pattern matching and feature matching approach to identify the frames containing the queried object, making the decision from multiple parameters.
In accordance with a preferred embodiment of the invention, the method involves combining any of the aforesaid steps with one another to arrive at a conclusion for the overall measure of goodness.
The method of this invention can specifically be used for retrieval in the context of any type of compressed or un-compressed video or image.
Further, the method of video or image retrieval using a similar methodology specified above can be used for any video or image retrieval scheme applied in the context of any compressed or uncompressed video.
The invention will now be described with reference to the accompanying drawings.
Brief Description of the Accompanying Drawings:
Figure 1 shows: Flowchart of the overall process of automatic retrieval of video;
Figure 2 shows: Detection of lower and upper thresholds from the histogram of the query image;
Figure 3 shows: First level block diagram describing the overview of the system;
Figure 4 shows: Block diagram of Offline Video Annotation;
Figure 5 shows: Block diagram of Query Image Based Online Video Retrieval;
Figure 6 shows: Flowchart of the online data retrieval;
Figure 7 shows: Homogeneity histogram difference plot;
Figure 8 shows: Homogeneity histogram difference plot for a very smooth dissolving effect.
Detailed Description of Invention:
The method in accordance with this invention has been mainly developed and tested against different medical surgical videos. Medical operations are usually recorded using three cameras. One of the cameras is placed on top of the operation table, one is held by hand and used to record the laparoscopic view displayed on a screen, and the third one is placed on a tripod.
From this set of images of the medical instruments and the available surgical videos, we can make the following observations:
• All the instruments have a linear pattern
• All the instruments have two distinct linear edges
• More than 20% of the instrument is visible during operation
• Instruments visible in laparoscopic view are monochromatic in nature. This leads to a single peak in the histogram of y, u and v components of the query image
Now we shall list the features that have been used to match the query image with the video:
• Colour Histogram of query image
• Linear Shape (from observation)
• Edge Gradient of input video
• Compactness of query image.
• Compactness of candidate object in video
The entire process is described in Figure 1. The overview of the overall system is described by a block diagram in Figure 3. Figure 4 describes the block diagram of offline video annotation and Figure 5 describes the block diagram of query based online video retrieval. The entire process of online video retrieval is described in Figure 6. A brief description of each module is provided below.
Color based feature extraction from Query Image: In this step, three separate histograms are plotted for the y, u and v components. Based on the plotted histogram we find the points with non-zero frequency in the neighborhood of the peak. We denote them as the lower threshold (lt) and upper threshold (ut) as shown in Figure 2.
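A minimal sketch of one reading of this step: for a single channel (y, u or v), take the contiguous run of non-zero histogram bins around the peak and report its ends as lt and ut. The 256-level range is an assumption for 8-bit channels.

```python
import numpy as np

def channel_thresholds(channel):
    """Lower (lt) and upper (ut) thresholds around the histogram peak of one
    query-image channel: the ends of the non-zero region surrounding the peak."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    peak = int(np.argmax(hist))
    lt = peak
    while lt > 0 and hist[lt - 1] > 0:       # walk left while frequency stays non-zero
        lt -= 1
    ut = peak
    while ut < 255 and hist[ut + 1] > 0:     # walk right while frequency stays non-zero
        ut += 1
    return lt, ut
```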
Determination of compactness threshold from query image: For compactness measurement the circularity parameter is selected as a feature, as it is invariant under scaling and translation. We extract the boundary of the query image and count the boundary pixels (b), and count the number of pixels containing information about the actual instrument to be searched (c). As the video may contain the instrument in any orientation, the compactness parameter is used as a feature in this application. The threshold value for compactness is determined as $t_c = 4\pi c / b^2$.
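A sketch of this compactness computation, assuming the query instrument is given as a binary mask and reading the circularity parameter as $4\pi c / b^2$ with c the instrument pixel count and b the boundary pixel count (the exact formula used in the specification is not reproduced here, so this reading is an assumption):

```python
import numpy as np
import cv2

def compactness_threshold(query_mask):
    """Circularity-based compactness t_c = 4*pi*c / b^2 for a binary query mask."""
    mask = (query_mask > 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    b = sum(len(cnt) for cnt in contours)    # boundary pixel count
    c = int(mask.sum())                      # instrument pixel count
    return 4.0 * np.pi * c / float(b * b)
```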
Segmentation of input video based on lt and ut: The input video stream comes in compressed H.264 format. Initially that compressed video is decoded into YUV 4:2:0 format ($p_{ij}$). Then each pixel of each frame of the video is labeled as 0 or 1, giving a binarized video frame $p'_{ij}$:

$p'_{ij} = 1$ if $l_t \le p_{ij} \le u_t$ for each of the y, u and v components, and $p'_{ij} = 0$ otherwise,

where i and j represent the row and column positions respectively.
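A sketch of this binarization, assuming the decoded y, u and v planes have been brought to the same resolution (YUV 4:2:0 stores u and v at half resolution) and that the per-channel thresholds learned from the query image are supplied as a dictionary:

```python
import numpy as np

def binarize_frame(y, u, v, thr):
    """Label each pixel 1 if all three channel values fall inside their
    [lt, ut] ranges, else 0.  thr = {'y': (lt, ut), 'u': (lt, ut), 'v': (lt, ut)}."""
    inside = np.ones_like(y, dtype=bool)
    for name, ch in (('y', y), ('u', u), ('v', v)):
        lt, ut = thr[name]
        inside &= (ch >= lt) & (ch <= ut)    # pixel must satisfy all three channels
    return inside.astype(np.uint8)
```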
Edge gradient based noise removal using the K-means algorithm: In the next phase the edges are obtained from the intensity video frame ($p_{ij}$) using the Sobel operator. Sobel gives the edge gradients, which are stored in an array ($p_e$). Among these points, some lead to sharp edges and some represent weak edges. In order to reduce the edges arising from noise in the input video or poor lighting conditions during recording of the video, we discard the weak edges. The K-means algorithm is used to obtain the threshold. The pseudo code of this module is as follows:

Apply the K-means algorithm with k = 2 on ($p_e$), which will in turn classify each pixel of ($p_e$) into one of the two clusters C1 and C2.

Reassign the values of ($p_e$): pixels falling in the cluster with the smaller centroid (weak edges) are set to 0, and pixels in the other cluster are retained,

where $c_{n1}$ and $c_{n2}$ are the centroids of C1 and C2 respectively.
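A sketch of this module using OpenCV's Sobel and k-means routines; the specific termination criteria and number of attempts are illustrative choices:

```python
import numpy as np
import cv2

def sobel_gradients(gray):
    """Edge gradient magnitude (p_e) of the intensity frame via the Sobel operator."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return cv2.magnitude(gx, gy)

def suppress_weak_edges(grad):
    """Cluster gradient magnitudes into two groups (k = 2) and zero out the
    cluster with the smaller centroid, i.e. the weak / noisy edges."""
    data = grad.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, 2, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    strong = int(np.argmax(centers))                 # cluster with larger centroid
    mask = (labels.reshape(grad.shape) == strong)
    return np.where(mask, 1, 0).astype(np.uint8)     # binary strong-edge map
```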
Removal of undesirable components using component labeling: In the next step the following operations are performed on $p'$ to reduce the search complexity.

Apply connected component analysis to label each pixel of the binarized frame $p'$, which gives $p''$.

For each component (comp), find the pixel count ($N_{comp}$).

Compute the bounding box ($B_{comp}$) for each component.

Find the component density ($D_{comp}$), which gives the number of pixels per unit area inside the bounding box for that particular component.

From observation it is found that the object has a high compactness and thus gives a high component density.

Now we filter the labeled $p''$ array based on $D_{comp}$ and the threshold $t_c$.
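A sketch of this filtering step using OpenCV's connected component statistics; treating $t_c$ directly as the density threshold is one reading of the step above:

```python
import numpy as np
import cv2

def filter_components(binary, tc):
    """Keep only connected components whose density (pixels per unit
    bounding-box area) reaches the compactness threshold tc."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary.astype(np.uint8))
    keep = np.zeros_like(binary, dtype=np.uint8)
    for comp in range(1, n):                          # label 0 is the background
        count = stats[comp, cv2.CC_STAT_AREA]         # N_comp
        w = stats[comp, cv2.CC_STAT_WIDTH]            # bounding box B_comp
        h = stats[comp, cv2.CC_STAT_HEIGHT]
        density = count / float(w * h)                # D_comp
        if density >= tc:                             # dense, compact component
            keep[labels == comp] = 1
    return keep
```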
Detection of object of interest based on the linear shape feature: The last step involves detection of the object of interest based on the linear shape of the query image as described in the observation section. To get the degree of linearity of an object boundary, the Hough transform is applied to the boundary of each component.

Let $N_i$ be the number of points in the boundary of the $i$-th component and $H_i$ be the number of boundary points belonging to the line produced by the Hough transform. We compute

$R_i = \frac{H_i}{N_i}, \quad i = 1, 2, \ldots, n$

where n is the total number of components in $p''$.

As all the instruments are typically linearly shaped, $R_i$ should ideally be 1. Some tolerance is imposed for noise: a threshold T is taken so that if $R_i < T$ the component is treated as noise. In this experiment T = 0.5 is taken, i.e. if $H_i$ is less than 50% of $N_i$ then the component is noise.
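A sketch of the linearity ratio for one component; the Hough accumulator threshold and the point-to-line distance tolerance are illustrative assumptions:

```python
import numpy as np
import cv2

def linearity_ratio(comp_mask, dist_tol=2.0):
    """R_i = H_i / N_i: fraction of a component's boundary points lying on
    the dominant straight line found by the Hough transform."""
    contours, _ = cv2.findContours(comp_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = np.vstack([c.reshape(-1, 2) for c in contours])   # (x, y) boundary points
    boundary = np.zeros_like(comp_mask, dtype=np.uint8)
    boundary[pts[:, 1], pts[:, 0]] = 255
    lines = cv2.HoughLines(boundary, 1, np.pi / 180, threshold=10)
    if lines is None:
        return 0.0
    rho, theta = lines[0][0]                                 # highest-vote line
    d = np.abs(pts[:, 0] * np.cos(theta) + pts[:, 1] * np.sin(theta) - rho)
    hi = int(np.sum(d <= dist_tol))                          # H_i
    return hi / float(len(pts))                              # R_i

# A component is treated as noise when linearity_ratio(mask) < T, with T = 0.5.
```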
Shot transitions can be categorized in two ways. One is a hard cut, which is a drastic change between two frames, and the other is a soft cut, which includes fade-in, fade-out, dissolve, etc. In this approach a hierarchical system is designed to detect the type of cut and also locate the position of the cut in one pass.
Every color frame is divided into 8x8 blocks. For every block we convert the pixels from RGB to HSV color space to have separate information about color, intensity and impurity. We construct a histogram for each block. As the color, intensity and impurity ranges are very large, we have used a quantization scheme.

The quantization scheme can be summarized as follows:
Let h, s, and v be the HSV domain color values, with s and v normalized to [0,1], for a particular R, G and B value, and let index be the quantized bin index.

A pure black area is found in $v \in [0, 0.2]$, for which index = 0.

A gray area is found in $s \in [0, 0.2]$ and $v \in [0.2, 0.8]$, for which $index = \lfloor (v - 0.2) \times 10 \rfloor + 1$.

A white area is found in $s \in [0, 0.2]$ and $v \in [0.8, 1.0]$, for which index = 7.

The color area is found in $s \in [0.2, 1.0]$ and $v \in [0.2, 1.0]$, for the different h values.

The quantization of the h, s and v values in the color area is done as follows. Let $H_{index}$, $S_{index}$ and $V_{index}$ be the quantized indices for the h, s and v values respectively. Finally, the histogram bin index is obtained by combining $H_{index}$, $S_{index}$ and $V_{index}$, offset by the eight achromatic bins (0-7).
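A sketch of one pixel's bin assignment under this scheme. The achromatic bins (0-7) follow the ranges given above; the numbers of chromatic h, s and v levels (n_h, n_s, n_v) and their composition into a single index are illustrative assumptions, since the exact chromatic quantization is not reproduced above.

```python
def hsv_bin(h, s, v, n_h=6, n_s=2, n_v=2):
    """Quantized bin index for one pixel; h in [0, 360), s and v in [0, 1]."""
    if v < 0.2:                                   # pure black area
        return 0
    if s < 0.2:
        if v < 0.8:                               # gray area, bins 1..6
            return int((v - 0.2) * 10) + 1
        return 7                                  # white area
    # chromatic area: s in [0.2, 1], v in [0.2, 1]
    hq = min(int(h / (360.0 / n_h)), n_h - 1)
    sq = min(int((s - 0.2) / (0.8 / n_s)), n_s - 1)
    vq = min(int((v - 0.2) / (0.8 / n_v)), n_v - 1)
    return 8 + (hq * n_s + sq) * n_v + vq         # chromatic bins start after bin 7
```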
In the above way we find the quantized color histogram of each 8x8 block. In a video there will be motion between consecutive frames, so we cannot compare a block of the current frame only with the block at the same position in the previous frame. Instead, we search for the correspondence of a block of the present frame within the 8-neighborhood of the corresponding block in the previous frame.
There are many distance functions available in the literature. The distance between two probability distributions can be computed using the Bhattacharyya distance function. The distance metric we have used is

$Dist(P,Q) = \sqrt{1 - \sum_i \sqrt{P_i Q_i}}$   ... (1)

where P and Q are probability distribution functions. The proof that equation (1) is a metric is available in the literature.
If the distance between the two histograms is less than T then we say that the blocks are identical, otherwise they are different.
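A sketch of the distance of equation (1) and of the 8-neighborhood block comparison; the layout of the per-block histograms as a nested list is an assumption made for illustration:

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Dist(P,Q) = sqrt(1 - sum_i sqrt(P_i * Q_i)) for two histograms."""
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))                   # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

def block_matches(prev_hists, cur_hist, by, bx, T):
    """True if the current block (by, bx) matches any block in its
    8-neighborhood (or the same position) in the previous frame."""
    rows, cols = len(prev_hists), len(prev_hists[0])
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = by + dy, bx + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                if bhattacharyya_distance(cur_hist, prev_hists[ny][nx]) < T:
                    return True
    return False
```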
We compute the above distance for every block of the image with respect to the previous frame. For a hard cut, the block difference image should have many blocks differing from the previous frame, and these should constitute objects of considerable area.
We use the connected component algorithm to find all the connected components in the difference image.

Let $A_i$ be the area of the $i$-th component of the difference image. We apply area thresholding to remove the isolated differing blocks.

We remove objects having an area of less than one macro-block, i.e. $A_i = 0$ if $A_i < 64$, meaning only a single block is different and not a single neighbour differs. We declare a hard cut if the remaining difference area is large, i.e. if a considerable number of connected macro-blocks differ between two consecutive frames.
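A sketch of this hard-cut decision, working on a block-level difference map (one entry per 8x8 block); the fraction of the frame that counts as "considerable" is not specified above, so min_fraction here is an illustrative assumption:

```python
import numpy as np
import cv2

def is_hard_cut(diff_blocks, min_fraction=0.5):
    """diff_blocks: binary map marking blocks that differ from the previous
    frame.  Isolated single blocks are discarded, then a hard cut is declared
    when the remaining differing area covers a large fraction of the frame."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(diff_blocks.astype(np.uint8))
    total = 0
    for comp in range(1, n):
        area = stats[comp, cv2.CC_STAT_AREA]      # component area, in blocks
        if area > 1:                              # drop isolated single blocks
            total += area
    return total >= min_fraction * diff_blocks.size
```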
For a soft cut the above method finds it difficult, as there is no major difference between blocks in the previous and current frames. By soft cut we mainly mean fade-in, fade-out, and dissolve.

During our investigation we noticed that during a soft cut two important characteristics of the image differ between the previous and current frames:

Edge strength

Gray value

We combine these two features to capture the soft cut.
To find the edge strength the image should be de-noised, as noise can introduce erroneous results. A LoG (Laplacian of Gaussian) filter can be applied to detect edges in the image. The LoG filter is described by

$LoG(x,y) = -\frac{1}{\pi\sigma^4}\left[1 - \frac{x^2+y^2}{2\sigma^2}\right] e^{-\frac{x^2+y^2}{2\sigma^2}}$

An alternative way to use the LoG filter is to apply a Gaussian mask over the image before applying the edge detection operator. This pre-processing step reduces the high frequency noise components prior to the differentiation step.
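A sketch of the alternative formulation just described (Gaussian smoothing followed by a Laplacian); the sigma value is an illustrative choice:

```python
import numpy as np
import cv2

def edge_image(gray, sigma=1.4):
    """Gaussian smoothing followed by a Laplacian, equivalent to a LoG filter;
    the magnitude is used as the edge-strength image I."""
    blurred = cv2.GaussianBlur(gray, (0, 0), sigma)     # suppress high-frequency noise
    return np.abs(cv2.Laplacian(blurred, cv2.CV_64F))   # edge strength I
```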
The above procedure gives us the edge image I of the gray frame F.

Now we normalize the image I to I' by the maximum edge value. The normalized edge strength is

$I'(i,j) = \frac{I(i,j)}{V}$

where

$V = \max_{i,j} I(i,j), \quad i = 0,1,2,\ldots,M, \; j = 0,1,2,\ldots,N$

We compute the homogeneity factor at every $I'(i,j)$. Let H be the homogeneity image; then

$H(i,j) = 1 - I'(i,j), \quad i = 0,1,2,\ldots,M, \; j = 0,1,2,\ldots,N$   ... (2)

One thing to notice in equation (2) is that if $I'(i,j)$ is large, i.e. the roughness is large, the homogeneity is low. As our interest lies in the roughness of the image we neglect the homogeneous regions, i.e. $H(i,j)$ is set to 0 wherever the roughness $I'(i,j)$ is below the threshold T. In our experiment we have taken T = 0.8, i.e. roughness of less than 80% is removed.
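A sketch of this normalization and homogeneity thresholding, following the reading above that pixels whose roughness falls below T are discarded:

```python
import numpy as np

def homogeneity_image(edge, T=0.8):
    """Normalize the edge image by its maximum (I'), form H = 1 - I', and
    discard pixels whose roughness I' is below T (homogeneous regions)."""
    i_norm = edge / edge.max()                    # I' in [0, 1]
    H = 1.0 - i_norm                              # homogeneity, equation (2)
    H[i_norm < T] = 0.0                           # keep only rough (edge-like) pixels
    return H
```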
The association of the normalized edge strength and the grayness of frame F is defined by a histogram Hist, which accumulates, for each gray level of F, the homogeneity values H(i,j) of the pixels having that gray level. The histogram is then normalized to [0,1] by its maximum value, giving the normalized histogram Nhist(i).
These normalized histograms of the previous and current frames are calculated, and the Bhattacharyya distance function of equation (1) is used to measure the difference between the two frames.
As a soft cut occurs very slowly we need to follow up the difference in every frame. In every frame the running average m and the standard deviation $\sigma$ of the histogram differences are calculated, where $d_i$ is the distance at the $i$-th frame. A soft cut occurs if

$d_i \geq m + p \cdot \sigma$

In our experiment p = 2.5.

Once a soft cut occurs we re-initialize the system, putting m = 0 and $\sigma$ = 0, and store the difference value at the soft cut in mdiff, i.e. mdiff = $d_i$.

The calculation of m and $\sigma$ starts afresh after a soft cut, but is restarted only once the difference values come back down. We put this constraint because after a soft cut the system usually remains unstable and the difference values fluctuate; to avoid that, we compute the parameters again only when the system is becoming stable, i.e. when the difference between frames is coming down. Figure 7 and Figure 8 show the difference ($d_i$) values for different frames.
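A sketch of this running-statistics detector; interpreting "the difference values coming down" as dropping below mdiff is one reading of the stabilization rule, and the need for at least two prior samples before testing is an illustrative choice:

```python
import numpy as np

class SoftCutDetector:
    """Running mean/std of frame differences with the test d_i >= m + p*sigma."""
    def __init__(self, p=2.5):
        self.p = p
        self.diffs = []          # differences seen since the last (re)initialization
        self.mdiff = None        # difference value recorded at the last soft cut

    def update(self, di):
        # after a cut, wait until the differences settle below mdiff
        # before restarting the statistics
        if self.mdiff is not None:
            if di < self.mdiff:
                self.mdiff = None
            return False
        if len(self.diffs) >= 2:
            m, s = np.mean(self.diffs), np.std(self.diffs)
            if di >= m + self.p * s:             # soft cut detected
                self.diffs = []                  # re-initialize m and sigma
                self.mdiff = di
                return True
        self.diffs.append(di)
        return False
```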
While considerable emphasis has been placed herein on the particular features of the preferred embodiment and the improvisations with regard to it, it will be appreciated that various modifications can be made in the preferred embodiments without departing from the principles of the invention. These and other modifications in the nature of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.