
"Irradiation Field Recognition"

Abstract: A method to extract irradiation field areas in an X-ray image represented by a digital signal representation, comprising the steps of: segmenting the image in multiple regions of pixels which have similar local image characteristics; fitting line segments to the boundaries of these regions, whereby said line segments correspond with candidate irradiation field boundaries and constitute a segmentation map; and classifying regions in said segmentation map into at least two classes, one class being irradiation field and the other class being collimated region, on the basis of at least one of local, regional and global image characteristics.


Patent Information

Application #
7227/CHENP/2013
Filing Date
06 September 2013
Publication Number
37/2015
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application
Patent Number
Legal Status
Grant Date
2023-12-27
Renewal Date

Applicants

AGFA HEALTHCARE
IP Department 3802, Septestraat 27, B-2640 Mortsel

Inventors

1. BERTENS Tom
Agfa HealthCare NV, IP Department 3802, Septestraat 27, B-2640 Mortsel

Specification

Irradiation field recognition
[DESCRIPTION]
FIELD OF THE INVENTION
The present invention relates to a method to extract
irradiation field areas in an X-ray image represented by a digital
signal representation.
BACKGROUND OF THE INVENTION
In digital radiography X-ray opaque material is used to protect
subjects against unnecessary exposure to X-rays, to limit the
radiation scattering and to obtain multiple irradiation fields on a
recording medium such as a stimulable phosphor sheet. The region
outside the irradiation fields will have high luminance when the
image is displayed on a display device. The strong light will have a
negative impact on the efficiency and accuracy of the diagnosis.
Therefore automatic recognition of the irradiation field and
blackening of the region outside the delineated irradiation fields
is an important part of image processing of digital radiographic
images.
Prior art techniques are edge based. The digital greyscale
image is searched for pixels constituting the boundary between
irradiation fields and the shadow of the x-ray opaque material.
These pixels are grouped into candidate line segments to obtain a
correct delineation of the irradiation fields. The candidate line
segments are evaluated against a rule set of often local image
characteristics. A patent following this general idea for
irradiation field recognition is EP 0610605. Another patent based on
this idea is US 5901240.
These edge-based approaches may fail in cases where the edges
are hardly distinguishable in the image. This is the case for images
with high scatter radiation. In these images the transition between
irradiation field and the shadow of the x-ray opaque material is not
an edge but a gradual transition zone.
Another category of images that may cause failure of the edge-based approach are low dose images with a very low contrast between
the irradiation field and the shadow of the x-ray opaque material.
An example is a lateral nose image in which the transition between
the dense bone of the skull and the shadow of the x-ray opaque
material has low contrast.
It is an aspect of the present invention to provide a method
for extracting the irradiation fields in an x-ray image that
overcomes the above disadvantages.
SUMMARY OF THE INVENTION
The above-mentioned aspects are realised by a method as set out
in claim 1. Specific features for preferred embodiments of the
invention are set out in the dependent claims.
The present invention relates to a region-based method of recognizing an irradiation field in a digital x-ray image, region-based in the sense that the candidate irradiation field boundaries are computed out of a segmented map of the image and not directly out of the greyscale image.
The proposed method is a 3-step process. The first step is
obtaining an accurate segmentation of the image into multiple
regions.
The second step is fitting line segments to the region
boundaries whereby the line segments are candidate irradiation field
boundaries and constitute a new segmentation map.
The third step is identifying in the new segmentation map the
regions corresponding to irradiation fields using local and/or
regional and/or global image characteristics.
The method of the present invention is generally implemented in
the form of a computer program product adapted to carry out the
method steps of the present invention when run on a computer. The
computer program product is commonly stored in a computer readable
carrier medium such as a DVD. Alternatively the computer program
product takes the form of an electric signal and can be communicated
to a user through electronic communication.
Further advantages and embodiments of the present invention
will become apparent from the following description and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows the greyscale pixel profile of a transition zone
between irradiation field at the left side and collimation region at
the right side,
Figure 2 shows the gradient magnitude of the profile shown in Figure 1,
Figure 3 shows the gradient of the gradient magnitude shown in Figure 2,
Figure 4 shows the computed measure of dissimilarity for the profile shown in Figure 1,
Figure 5 is an example dataset,
Figure 6 shows the hierarchical clustering tree of the example dataset shown in Figure 5,
Figure 7 shows the parameterized equation of a straight line,
Figure 8 is an illustration of a bounding box,
Figure 9 illustrates the case of multiple irradiation fields.
DETAILED DESCRIPTION OF THE INVENTION
The proposed method is a 3-step process. The first step is
obtaining an accurate segmentation of the image into multiple
regions.
The second step is fitting line segments to the region
boundaries whereby the line segments are candidate irradiation field
boundaries and constitute a new segmentation map.
The third step is identifying in the segmentation map the
regions corresponding to irradiation fields using local, regional
and global image characteristics.
Segmentation
In the described embodiment, the segmentation of the image into
multiple regions involves a multi-scale watershed technique and a
clustering technique.
The multi-scale watershed technique has some advantages over
the standard watershed technique.
The multi-scale watershed technique has a better edge focus
compared to the basic watershed technique in which blurring affects
the shape of the segmented objects.
The multi-scale watershed technique does not provide a complete
segmentation, but provides a coarse segmentation into regions of
varying sizes which can form building blocks for an advanced, more
specialized segmentation process.
A detailed description of the multi-scale watershed
segmentation can be found in "Front-End Vision and Multi-Scale Image
Analysis" by Bart M . ter Haar Romeny, ISBN 1-4020-1502-8.
The multi-scale watershed segmentation requires a measure of
dissimilarity in scale-space.
A simple definition of dissimilarity measure at a specific
scale is the gradient magnitude squared, computed using the 1st order
Gaussian derivatives.
Combining 1st order and 2nd order Gaussian derivatives results in better edge focusing for images with relatively high scatter radiation, and thus a bad edge definition and a relatively wide transition zone between irradiation field and the shadow of the x-ray opaque material, also referred to as collimation region.
The squared gradient magnitude L_x^2 + L_y^2 is combined with a clipped version of L_ww, which is the derivative of the gradient in the gradient direction. Local maxima and minima of L_ww indicate the largest gradient changes.
L_ww can be computed out of 1st and 2nd order Gaussian derivatives as:

L_ww = (L_x^2 L_xx + 2 L_x L_y L_xy + L_y^2 L_yy) / (L_x^2 + L_y^2)
Of main interest are the local extrema closest to the shadow of
the X-ray opaque collimation material. As on average the shadow of
X-ray opaque material has higher greyscale pixel values compared to
the greyscale pixel values of the irradiation fields, only the local
minima must be preserved.
The measure of dissimilarity Diss is computed out of the squared gradient magnitude and the clipped L_ww:

Diss = L_x^2 + L_y^2 - min(L_ww, 0)
Figure 1 shows the greyscale pixel profile of a transition zone
between irradiation field at the left side and collimation region at
the right side. Figure 2 shows the gradient magnitude of the profile
shown in Figure 1. Figure 3 shows the gradient of the gradient magnitude shown in Figure 2. Figure 4 shows the computed measure of dissimilarity for the profile shown in Figure 1. The measure of dissimilarity has a maximum corresponding with the largest gradient change near the collimation region.
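By way of illustration, a minimal Python sketch of this dissimilarity computation at a single scale is given below. The scale value, the guard against a zero gradient and the exact way the clipped L_ww is combined with the squared gradient magnitude are assumptions consistent with the text above, not part of the patent itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.segmentation import watershed

def dissimilarity(img, sigma):
    """Dissimilarity measure at one scale, built from Gaussian derivatives."""
    # 1st order Gaussian derivatives (axis 0 = y, axis 1 = x)
    Lx = gaussian_filter(img, sigma, order=(0, 1))
    Ly = gaussian_filter(img, sigma, order=(1, 0))
    # 2nd order Gaussian derivatives
    Lxx = gaussian_filter(img, sigma, order=(0, 2))
    Lyy = gaussian_filter(img, sigma, order=(2, 0))
    Lxy = gaussian_filter(img, sigma, order=(1, 1))
    grad2 = Lx ** 2 + Ly ** 2  # squared gradient magnitude
    # second derivative of the image in the gradient direction (L_ww)
    Lww = (Lx ** 2 * Lxx + 2 * Lx * Ly * Lxy + Ly ** 2 * Lyy) / np.maximum(grad2, 1e-12)
    # keep only the local-minima side of L_ww (clip positive values to zero);
    # combining it by subtraction is an assumption consistent with the text
    return grad2 - np.minimum(Lww, 0.0)

# coarse over-segmentation at a fine scale (the sigma value is an assumption)
# regions = watershed(dissimilarity(img.astype(float), sigma=2.0))
```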
In this embodiment the multi-scale watershed segmentation is
preferably limited to rather fine scales as for coarser scales the
blurring effect can remove important low contrast edges.
The outcome of the segmentation by the multi-scale watershed
technique is a map of multiple small regions. These small regions
can be grouped or clustered to obtain an improved and more accurate
segmentation of the image.
Different clustering algorithms are available.
In this embodiment the hierarchical clustering technique is
used, more specifically agglomerative hierarchical clustering.
In agglomerative hierarchical clustering individual data points
are considered as clusters that are successively pairwise combined
into parent clusters.
The combination of the clusters is repeated until the whole
dataset is merged into 1 cluster. In this way a hierarchy or tree
structure is created by tracing which clusters are merged.
In each iteration the most similar pair of clusters is merged.
The measure of similarity or distance between a pair of clusters is based on the median pixel value m_k of the individual regions and the standard deviation s_k of the pixel values within the individual regions.
Several variants of hierarchical clustering exist which differ
in how the distance between a pair of clusters is defined in terms
of their members.
In this embodiment the complete-linkage clustering, alternatively known as pairwise maximum-linkage clustering, is used.
For this type of hierarchical clustering the distance between a pair
of clusters is defined as the maximum distance among the pairwise
distances between the members of both clusters.
As distance measure the Euclidean distance is used:

dist(k, l) = sqrt((m_k - m_l)^2 + (s_k - s_l)^2)
The dataset on which the hierarchical clustering is applied is
limited to regions with a minimum size, further referred to as large
regions, as the standard deviation becomes an unpredictable
measurement of variability for small populations.
Figure 6 shows the hierarchical clustering tree of the example
dataset shown in Figure 5. For example the similarity between cluster 'AB' and cluster 'DE' will be the distance between member A and member E, as this is the largest distance among all the pairwise distances between the members of both clusters.
As the linking and merging of all the clusters into 1 cluster is of no interest in this embodiment, stop criteria are defined to stop the hierarchical clustering at an intermediate result. Possible stop criteria for the hierarchical clustering are: the total number of clusters is reduced to a predefined fraction of the original number of regions, or the smallest distance between a pair of clusters exceeds a maximum distance, or the combination of the aforementioned stop criteria.
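This clustering step, with both stop criteria, can be sketched using scipy's agglomerative hierarchical clustering. The arrays medians and stds (one entry per large region), the threshold max_distance and the fraction 0.1 are illustrative placeholders, not values from the patent.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# one feature row (median m_k, standard deviation s_k) per large region
features = np.column_stack([medians, stds])

# complete-linkage (pairwise maximum-linkage) clustering, Euclidean distance
Z = linkage(features, method='complete', metric='euclidean')

# stop criterion 1: stop merging once the smallest inter-cluster distance
# would exceed a maximum distance (max_distance is a placeholder threshold)
labels = fcluster(Z, t=max_distance, criterion='distance')

# stop criterion 2: reduce to a predefined fraction of the original count
labels_alt = fcluster(Z, t=max(1, int(0.1 * len(features))), criterion='maxclust')
```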
In a last step the small individual regions, which are not
involved in the hierarchical clustering, are merged with the most
similar cluster of large regions.
The measure of similarity is based on the median pixel value m_k of the small region and of the cluster of large regions, and on the position of the small region and of the cluster of large regions in the image (posX_k, posY_k). The position of a region is computed as the centre of gravity, i.e. the average position in X and Y direction of all the pixels of that region. To compute the median pixel value and the position of a cluster of large regions, all the pixels of the members of the cluster are taken into account.
The Euclidean distance is used as distance measure between a small region s and a cluster of large regions L:

dist(s, L) = sqrt((m_s - m_L)^2 + (posX_s - posX_L)^2 + (posY_s - posY_L)^2)
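In code form this merge distance might look as follows; the equal weighting of the greyscale term and the two position terms is an assumption, as the text does not specify a relative weighting.

```python
import numpy as np

def merge_distance(m_s, pos_s, m_L, pos_L):
    """Euclidean distance between a small region s and a cluster of large
    regions L, combining median pixel value and centre of gravity."""
    # pos_* are (posX, posY) centres of gravity; m_* are median pixel values
    return np.sqrt((m_s - m_L) ** 2
                   + (pos_s[0] - pos_L[0]) ** 2
                   + (pos_s[1] - pos_L[1]) ** 2)
```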
Fitting of line segments
A next step in the segmentation process is fitting line
segments to the boundaries of the segmented regions.
In this embodiment the Hough transform is used to extract straight
line segments.
For persons skilled in the art it is known to adapt the
technique to extract other shapes of irradiation fields such as
circles or ellipses.
The Hough transform is a feature extraction technique. The
purpose is to localize in an image imperfect instances of objects
with a specific shape by a voting procedure.
The shape of interest is specified by a parametric equation.
Votes are accumulated in parameter space referred to as Hough space.
The parameterized equation used for the extraction of straight
lines is:
r = x·cos(θ) + y·sin(θ)
with parameter r the perpendicular distance between the line and the origin and parameter θ the direction perpendicular to the line orientation (see figure 7).
The Hough space is quantized for both parameters.
If a pixel is considered to be an edge pixel, for every orientation of interest θ the corresponding parameter r is computed, where (r, θ) specifies a line through the edge pixel. In the voting step the values of the corresponding bins in the quantized parameter space are increased for all the computed (r, θ) pairs. This way an accumulator space is generated. Significant straight lines in the image can be found as local maxima in the Hough space.
In this embodiment the basic implementation of the Hough
transform is improved to extract more selectively the boundaries of
the irradiation fields.
In the voting step of the basic implementation of the Hough
transform, the accumulated weights are increased with a predefined
value, usually 1 .
A more selective voting method is used in this embodiment. The
strength of the vote is weighted for the gradient magnitude and
gradient orientation.
The gradient vector is projected to the different discrete directions [0, π[ for which the Hough parameter space is computed. The length of the gradient vector is first normalized and clipped to a specified percentile of the gradient distribution in the image. Gradient vectors with a magnitude above the percentile will have a normalized length of 1 and gradient magnitudes below the percentile are linearly rescaled between [0, 1[. The clipping prohibits excessive responses to strong edges in the image. The projection of the gradient vector to a particular direction results in a more selective voting and cancels out weight accumulation of opposite gradients in the basic Hough transform (e.g. chessboard patterns).
The computed Hough space is normalized such that a line segment
with maximum length for a particular orientation and position in the
image will have absolute value 1.0 in the Hough space. The values in
the normalized Hough space are thus independent of the aspect ratio
and dimensions of the image.
The normalization is applied by dividing the Hough space by a
2nd Hough space. This 2nd Hough space is computed under the assumption
that every pixel in the greyscale image is an edge pixel.
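A sketch of the weighted voting and of the second, normalizing Hough space is given below. The bin counts and the clipping percentile are assumed values, and the signed projection keeps the polarity information that is used further on; none of these implementation details are prescribed by the patent text.

```python
import numpy as np

def weighted_hough(edges, gx, gy, n_theta=180, n_r=400, pct=95.0):
    """Hough accumulator with gradient-weighted, direction-projected votes."""
    ys, xs = np.nonzero(edges)
    mag = np.hypot(gx[ys, xs], gy[ys, xs])
    ang = np.arctan2(gy[ys, xs], gx[ys, xs])
    # normalize and clip the gradient magnitude at a percentile
    mag = np.clip(mag / np.percentile(mag, pct), 0.0, 1.0)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = np.hypot(*edges.shape)
    acc = np.zeros((n_r, n_theta))
    for j, th in enumerate(thetas):
        r = xs * np.cos(th) + ys * np.sin(th)   # r = x cos(th) + y sin(th)
        w = mag * np.cos(ang - th)              # signed projection onto th
        bins = np.round((r + diag) / (2 * diag) * (n_r - 1)).astype(int)
        np.add.at(acc, (bins, j), w)
    return acc, thetas

def reference_hough(shape, n_theta=180, n_r=400):
    """2nd Hough space, computed as if every pixel were an edge pixel."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    ys, xs = ys.ravel(), xs.ravel()
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = np.hypot(*shape)
    ref = np.zeros((n_r, n_theta))
    for j, th in enumerate(thetas):
        r = xs * np.cos(th) + ys * np.sin(th)
        bins = np.round((r + diag) / (2 * diag) * (n_r - 1)).astype(int)
        np.add.at(ref, (bins, j), 1.0)
    return ref

# normalized space, independent of image aspect ratio and dimensions:
# norm_acc = acc / np.maximum(ref, 1.0)
```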
Although the above described improvements tackle some generally known drawbacks of the basic Hough transform, further corrections to the computed Hough space may be implemented to overcome some issues with respect to the detection of the boundaries of the irradiation fields.
An irradiation field can be very small with respect to the
whole image area, e.g. DR images of paediatric fingers. An
irradiation field can also be very elongated and it can have an
aspect-ratio different from the aspect ratio of the image itself.
This results in very low accumulated values in the Hough space
for the boundaries of such small or elongated irradiation fields.
The above mentioned issues can be tackled by estimating a best-fit bounding box of the irradiation fields in the image, i.e. the
oriented, smallest rectangle enclosing all the pixels belonging to
the irradiation fields present in the image.
The bounding box indicates roughly the maximum line segment
length and thus maximum accumulated weights that can be found for 2
perpendicular directions.
Such a bounding box can be computed by thresholding the
greyscale image. Thresholding results in a binary mask indicating
the pixels below a threshold, hence the darkest regions in the
image. Figure 8 shows such a binary mask and the best-fit bounding
box.
To estimate the orientation of the bounding box, the main
direction in the image is computed out of the Hough space. The main
direction is the direction with the most and longest line segments.
For every direction the cumulative sum of the Hough space is computed. Only a predefined percentage of the longest line segments per orientation is taken into account. The direction with the highest cumulative sum, θ_main, is considered to be the main direction in the image and defines the orientation of the bounding box.
To compute the dimensions of the bounding box, the Hough transform of the binary mask is analysed along the main direction θ_main and along its perpendicular direction θ_90.
As the binary mask gives a rough, noisy estimation of the
irradiation fields, the bounding box must enclose just a percentage
of the indicated irradiation field pixels.
The total sum of the Hough space of the binary mask is computed
along the profile of the main direction θ_main. This profile is
analysed starting at the 2 outer points until the cumulative sum
exceeds a predefined threshold. The same analysis is applied to the
profile along the perpendicular direction.
In this way the size, orientation and position of the bounding box are found, and thus the maximum line segment length r_max,main for the main direction and the maximum line segment length r_max,90 for the perpendicular direction. This can be used to apply a multiplicative correction to the normalized Hough space for the different angles θ between [0, π[:

CorrBB(θ) = 1.0 / [r_max,main + (r_max,90 - r_max,main) * (0.5 - 0.5*cos(2*(θ - θ_main)))]
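A sketch of this correction follows; it assumes r_max,main and r_max,90 are expressed as fractions of the full image-crossing line length, so that corrected boundary values approach 1.0. That unit convention is an assumption, not stated in the patent text.

```python
import numpy as np

def corr_bb(thetas, theta_main, r_max_main, r_max_90):
    """Angle-dependent bounding-box correction of the normalized Hough space.
    r_max_main / r_max_90: maximum segment length (as a fraction of the full
    image-crossing line length) along the main and perpendicular directions."""
    blend = 0.5 - 0.5 * np.cos(2.0 * (thetas - theta_main))
    return 1.0 / (r_max_main + (r_max_90 - r_max_main) * blend)

# applied per angle, i.e. per Hough-space column:
# norm_acc *= corr_bb(thetas, theta_main, r_max_main, r_max_90)[np.newaxis, :]
```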
The irradiation field boundaries are short compared to the
total image length or width in case of multiple irradiation fields in 1 image.
The number of irradiation fields can be computed by searching
the Hough space for 2 parallel or slightly tilted line segments with
opposite polarity. Such pairs indicate the small shadows of the x-ray opaque material between irradiation fields. If such pairs of
line segments with opposite polarity are found, a correction can be
applied to the Hough space for the perpendicular direction.
If the image is composed of multiple irradiation fields, these
tend to be roughly aligned with the main image directions. In that
case, the search range for a pair of line segments with opposite
polarity is limited to a range of -10 to +10 degrees around the main directions θ_main and θ_90.
The number of the detected pairs of line segments and their position
within the bounding box determines the correction factors that are
applied.
The number of irradiation fields in a cross-section along the main directions (N_main and N_90) equals the number of detected line pairs along that direction plus 1. The width or length of the individual irradiation fields is computed as the distance between the line pairs and the distance between the line pairs and the bounding box boundaries as shown in figure 9.
The maximum distance is used to compute the multiplicative correction factors for both main directions:

CorrPart_main = 1 / max(dim_i : i ∈ [0, N_main[)
CorrPart_90 = 1 / max(dim_i : i ∈ [0, N_90[)
A multiplicative correction is applied to the normalized Hough space for the different angles θ between [0, π[:

CorrPart(θ) = CorrPart_main + (CorrPart_90 - CorrPart_main) * 0.5 * (1 - cos(2*(θ - θ_main)))
After applying the above described corrections to the computed,
normalized Hough space, the boundaries of the irradiation fields
should have a theoretical Hough space value of ca. 1.0.
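The multiple-field correction can be sketched in the same way as the bounding-box correction; treating the dim_i values as fractions of the corresponding maximum segment length is an assumption.

```python
import numpy as np

def corr_part_factor(dims):
    """Correction factor for one direction; dims are the widths/lengths of
    the individual fields as fractions of the maximum segment length."""
    return 1.0 / max(dims)

def corr_part(thetas, theta_main, corr_main, corr_90):
    """Angle-dependent multiple-field correction, blended as in the text."""
    blend = 0.5 * (1.0 - np.cos(2.0 * (thetas - theta_main)))
    return corr_main + (corr_90 - corr_main) * blend
```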
Out of the normalized, corrected Hough space the strongest
edges are selected as the local maxima with a value above a
predefined threshold.
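Selecting these local maxima can be sketched with a simple maximum filter; the neighbourhood size and the use of the absolute value of the polarity-signed Hough space are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def hough_peaks(acc, threshold, size=5):
    """Local maxima of the corrected Hough space above a threshold."""
    mag = np.abs(acc)  # the accumulator is polarity-signed
    is_peak = (mag == maximum_filter(mag, size=size)) & (mag > threshold)
    return np.argwhere(is_peak)  # array of (r_bin, theta_bin) pairs
```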
To ensure that no important irradiation field boundaries are
missed, the set of Hough edges is checked against the position of
the bounding box computed for the correction of the Hough space. If
no Hough edge is detected in the neighbourhood of a side of the
bounding box, the strongest Hough edge is added with the appropriate
orientation and position, even if the corresponding Hough value is
below the predefined threshold.
Every local maximum in the Hough space corresponds with a line
that runs completely through the image. The line is divided into
separate line segments determined by the crossings with the other
preserved Hough edges. Only line segments are preserved with a significant overlap with the region boundaries (the result of the multi-scale watershed segmentation and hierarchical clustering). The preserved line segments divide the image into multiple regions with straight boundaries. Labelling these individual regions defines a new segmentation map, referred to as the Hough segmentation map.
Classification
The line segments comprised in the segmentation map define non-overlapping image regions, called Hough regions further on.
A binary classification is applied to the Hough segmentation
map to identify the regions that are part of the irradiation field
and the regions that are part of the shadow of the x-ray opaque
material.
For this classification local, regional and global image
characteristics are used.
Different classification methods are available.
In this embodiment a perceptron is used, which is the simplest kind of feedforward neural network, in fact a linear classifier:

class(x) = 1 if Σ_{i=1..p} w_i·x_i + b > 0, and class(x) = 0 otherwise

with x the vector of feature values [x_1, x_2, ..., x_p], w the vector of weights and b a bias term. More detailed information about perceptrons and adjusting the weight vector w can be found in "Neural Networks: A Comprehensive Foundation" by Simon Haykin, ISBN 0-02-352761-7.
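A minimal perceptron matching the decision rule above, together with the classical perceptron weight-update rule, might look as follows; the learning rate and epoch count are illustrative, not taken from the patent.

```python
import numpy as np

def classify(x, w, b):
    """Perceptron decision: 1 = irradiation field, 0 = collimated region."""
    return 1 if np.dot(w, x) + b > 0 else 0

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Classical perceptron learning rule on feature matrix X, labels y."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - classify(xi, w, b)  # -1, 0 or +1
            w += lr * err * xi
            b += lr * err
    return w, b
```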
An important characteristic for distinguishing irradiation
fields from collimated, non-diagnostic regions is the local standard
deviation. The local standard deviation of non-diagnostic regions is
rather low compared to the local standard deviation of diagnostic
image regions. In general, there are more local pixel value
variations in a diagnostic region.
The distance between the histogram of local standard deviation
of the Hough regions and the global histogram of local standard
deviation of the whole image is a strong feature to be used in the
classification. As distance measure the Euclidean distance is used.
The Euclidean distance between two histograms is computed by summing the squared differences between the bin counts of both histograms, assuming both histograms have the same binning and are normalized to the same cumulated count:

dist(histA, histB) = sqrt(Σ_i (histA_i - histB_i)^2)
If most of the image area consists of diagnostic regions, the
non-diagnostic regions will have a large distance to the global
histogram of local standard deviation and the diagnostic regions
will have a small distance to the global histogram of local standard
deviation.
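In code form (both histograms are assumed to share the same binning):

```python
import numpy as np

def hist_distance(hist_a, hist_b):
    """Euclidean distance between two histograms after normalizing both
    to the same cumulated count."""
    a = hist_a / hist_a.sum()
    b = hist_b / hist_b.sum()
    return np.sqrt(np.sum((a - b) ** 2))
```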
For images with a small diagnostic region with respect to the
total image area, the histograms of the non-diagnostic regions will
be very similar to the global histogram of local standard deviation
and thus will have a smaller distance compared to the distances of
the histograms of diagnostic regions.
Therefore there is a need for a second distance measure. The
reference histogram is in this case computed as the histogram of
local standard deviation of only the bright regions in the image.
To compute this reference histogram, only the local standard deviation values of the pixels with a greyscale value above a predefined threshold are taken into account.
Unlike the first Euclidean distance, this distance will be
small for non-diagnostic regions and large for diagnostic regions.
Another characteristic computed out of the histograms of local
standard deviation of the different regions is the cumulative sum of
the bin counts of the normalized histograms of the first N bins with
N the bin number for which the cumulated, normalized bin counts of
the global histogram of local standard deviation exceeds a
predefined value. As stated above, non-diagnostic regions have on average a lower local standard deviation compared to the diagnostic regions, so this will result in higher cumulative sums for the non-diagnostic regions.
Another characteristic used is the amount of strong edges.
Diagnostic regions will contain more strong edges than non-diagnostic regions. A pixel is considered to be part of a strong
edge if the local standard deviation exceeds a predefined threshold.
The characteristic is expressed as the number of pixels in a region for which the local standard deviation exceeds the predefined threshold, relative to the total number of pixels in the region.
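This feature reduces to a single ratio per region, for example:

```python
import numpy as np

def strong_edge_fraction(local_std, region_mask, threshold):
    """Fraction of pixels in the region whose local standard deviation
    exceeds the predefined threshold."""
    vals = local_std[region_mask]
    return np.count_nonzero(vals > threshold) / vals.size
```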
Another strong feature to be used in the classification is the
difference in grey pixel values between a region and its surrounding
regions. The boundaries of the irradiation fields are characterized
by dark pixels at the inner side and brighter pixels at the outer
side. The average greyscale pixel value difference at the boundaries
of the Hough regions is an indicator to classify the regions as
irradiation field or collimation region.
The above described list of characterizing features can be extended with other features such as the median or average greyscale pixel value of a Hough region, the position within the image, shape characteristics (e.g. aspect ratio), the presence of burned pixels, etc.
Using the above described features a feature vector is created
for each individual region in the Hough segmentation map. Using this feature vector as input to the perceptron, an accurate classification is achieved for each region in the Hough segmentation map. The result is a binary map delineating the irradiation fields in a radiographic image.
The binary map can be converted to more compact representations.
In an embodiment the representation is run-length encoded.
In another embodiment the representation is a set of polygons
extracted from the binary map.
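A minimal row-wise run-length encoding of the binary map might look as follows; encoding per row is an assumed layout, as the patent does not specify one.

```python
import numpy as np

def rle_row(row):
    """Run-length encode one row of the binary map as (start, length, value)."""
    change = np.flatnonzero(np.diff(row)) + 1          # indices where runs start
    starts = np.concatenate(([0], change))
    lengths = np.diff(np.concatenate((starts, [row.size])))
    return list(zip(starts.tolist(), lengths.tolist(), row[starts].tolist()))
```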
Claims:
1. A method to extract irradiation field areas in an X-ray image represented by a digital signal representation comprising the steps of:
a. Segmenting said image in multiple regions of pixels which have similar local image characteristics,
b. Fitting line segments to the boundaries of said regions whereby said line segments correspond with candidate irradiation field boundaries and constitute a segmentation map,
c. Classifying regions in said segmentation map into at least two classes, one class being irradiation field and the other class being collimated region, on the basis of at least one of local, regional and global image characteristics.
2. A method according to claim 1 wherein said segmentation of the image in multiple regions is based on multi-scale watershed segmentation.
3. A method according to claim 1 wherein said segmentation of the image in multiple regions is improved using image clustering to merge regions which have similar local image characteristics.
4. A method according to claim 3 wherein said clustering technique is hierarchical clustering with the measure of similarity based on at least one of the median or average greyscale pixel value of a segmented region, the standard deviation of the greyscale pixel values within a segmented region and the position of the segmented region in the image.
5. A method according to claim 1 wherein the Hough transform is applied to the boundaries of the segmented regions to fit line segments corresponding with candidate irradiation field boundaries.
6. A method according to claim 5 wherein the Hough transform is normalized and corrected in a way that the Hough space values of the boundaries of the irradiation fields in the image approximate value 1.0.
7. A method according to claim 5 wherein only line segments are preserved that have a significant overlap with the boundaries of the clustered regions.
8. A method according to claim 1 wherein said regional characteristics are computed out of the histograms of local standard deviation of the different segmented regions.
9. A method according to claim 8 wherein a characteristic computed out of the histograms of local standard deviation is a distance measurement between said histograms and a histogram of local standard deviation of the total image.
10. A method according to claim 8 wherein a characteristic computed out of the histograms of local standard deviation is a distance measurement between said histograms and a reference histogram of local standard deviation of only the brightest regions in the image.
11. A method according to claim 8 wherein a characteristic computed out of the normalized histograms of local standard deviation is the cumulative sum of the histogram below a specified histogram abscissa.
12. A method according to claim 1 wherein said regional characteristic is the amount of strong edges in the different segmented regions.
13. A method according to claim 1 wherein said regional characteristic is the average greyscale pixel difference between a region of interest and its surroundings in the neighbourhood of the boundaries of said region of interest.
14. A method according to claim 1 wherein the binary classification is performed by using a perceptron.
15. A computer program product adapted to carry out the method of any of the preceding claims when run on a computer.
16. A computer readable medium comprising computer executable program code adapted to carry out the steps of any of claims 1-14.

Documents

Application Documents

# Name Date
1 7227-CHENP-2013 POWER OF ATTORNEY 06-09-2013.pdf 2013-09-06
2 7227-CHENP-2013-IntimationOfGrant27-12-2023.pdf 2023-12-27
3 7227-CHENP-2013 PCT PUBLICATION 06-09-2013.pdf 2013-09-06
4 7227-CHENP-2013-PatentCertificate27-12-2023.pdf 2023-12-27
5 7227-CHENP-2013-ABSTRACT [16-07-2019(online)].pdf 2019-07-16
6 7227-CHENP-2013 FORM-5 06-09-2013.pdf 2013-09-06
7 7227-CHENP-2013-CLAIMS [16-07-2019(online)].pdf 2019-07-16
8 7227-CHENP-2013 FORM-3 06-09-2013.pdf 2013-09-06
9 7227-CHENP-2013-DRAWING [16-07-2019(online)].pdf 2019-07-16
10 7227-CHENP-2013 FORM-2 FIRST PAGE 06-09-2013.pdf 2013-09-06
11 7227-CHENP-2013-FER_SER_REPLY [16-07-2019(online)].pdf 2019-07-16
12 7227-CHENP-2013 FORM-18 06-09-2013.pdf 2013-09-06
13 7227-CHENP-2013-OTHERS [16-07-2019(online)].pdf 2019-07-16
14 7227-CHENP-2013 FORM-1 06-09-2013.pdf 2013-09-06
15 7227-CHENP-2013-FORM 3 [12-07-2019(online)].pdf 2019-07-12
16 7227-CHENP-2013 DRAWINGS 06-09-2013.pdf 2013-09-06
17 7227-CHENP-2013 DESCRIPTION (COMPLETE) 06-09-2013.pdf 2013-09-06
18 7227-CHENP-2013-Certified Copy of Priority Document (MANDATORY) [02-04-2019(online)].pdf 2019-04-02
19 7227-CHENP-2013 CORRESPONDENCE OTHERS 06-09-2013.pdf 2013-09-06
20 7227-CHENP-2013-FER.pdf 2019-03-29
21 7227-CHENP-2013 CLAIMS SIGNATURE LAST PAGE 06-09-2013.pdf 2013-09-06
22 Correspondence by Agent_Assignment_19-02-2019.pdf 2019-02-19
23 7227-CHENP-2013 CLAIMS 06-09-2013.pdf 2013-09-06
24 7227-CHENP-2013-8(i)-Substitution-Change Of Applicant - Form 6 [12-02-2019(online)].pdf 2019-02-12
25 7227-CHENP-2013-ASSIGNMENT DOCUMENTS [12-02-2019(online)].pdf 2019-02-12
26 7227-CHENP-2013.pdf 2013-09-11
27 7227-CHENP-2013 FORM-3 04-03-2014.pdf 2014-03-04
28 7227-CHENP-2013-FORM-26 [12-02-2019(online)].pdf 2019-02-12
29 7227-CHENP-2013 CORRESPONDENCE OTHERS 04-03-2014.pdf 2014-03-04
30 7227-CHENP-2013-PA [12-02-2019(online)].pdf 2019-02-12
31 7227-CHENP-2013 CORRESPONDENCE OTHERS 06-08-2015.pdf 2015-08-06
32 7227-CHENP-2013 FORM-13 05-12-2014.pdf 2014-12-05
33 FORM 13.pdf 2014-12-16
34 7227-CHENP-2013 CORRESPONDENCE OTHERS 25-05-2015.pdf 2015-05-25
35 Annexure to GPA.pdf 2014-12-16

Search Strategy

1 2019-03_26-03-2019.pdf

ERegister / Renewals

3rd: 15 Mar 2024 (From 27/02/2014 To 27/02/2015)
4th: 15 Mar 2024 (From 27/02/2015 To 27/02/2016)
5th: 15 Mar 2024 (From 27/02/2016 To 27/02/2017)
6th: 15 Mar 2024 (From 27/02/2017 To 27/02/2018)
7th: 15 Mar 2024 (From 27/02/2018 To 27/02/2019)
8th: 15 Mar 2024 (From 27/02/2019 To 27/02/2020)
9th: 15 Mar 2024 (From 27/02/2020 To 27/02/2021)
10th: 15 Mar 2024 (From 27/02/2021 To 27/02/2022)
11th: 15 Mar 2024 (From 27/02/2022 To 27/02/2023)
12th: 15 Mar 2024 (From 27/02/2023 To 27/02/2024)
13th: 15 Mar 2024 (From 27/02/2024 To 27/02/2025)