Abstract: The present invention provides a system and a method for detecting a raised print on a document surface using structured light. In one embodiment, this is accomplished by imaging the light reflected from the illuminated document to obtain an image, transforming the obtained image into an upper edge component and a lower edge component, and comparing the difference between the upper edge component and the lower edge component with a predefined value to detect the raised print. The predefined value is a self-learned reference value generated on the fly, corresponding to a particular height or a range of heights. The system can therefore also determine the height of the raised print on the document surface without actually measuring the height.
A SYSTEM AND METHOD FOR DETECTING RAISED PRINT
FIELD OF THE INVENTION
[0001] The present invention generally relates to detecting raised print and more particularly to a system and method for detecting intaglio print.
PRIOR ART
[0002] It is often required to check the authenticity of security documents such as banknotes. Genuine documents are usually printed with printing methods that cannot easily be replicated, e.g., the intaglio printing method. Documents printed with the intaglio printing technique are provided with regions of raised material (raised print characterized by a print height). The authenticity of such documents may therefore be checked by detecting the presence of the raised material (raised intaglio print) on the document. A further concern for security printers is to produce printed notes whose raised intaglio print has a predetermined, controlled tactility.
[0003] Various methods are employed in the art to detect the presence of raised print on a document. One example of such a method and apparatus for detecting raised print is provided below.
[0004] US Patent Publication No. 2009/0310126, assigned to De La Rue International Ltd., discloses a method and apparatus for raised material detection. A document surface is illuminated with a radiation beam such that any raised material on the document surface reflects this radiation beam. The illuminated surface is imaged using a radiation detector. The illuminating step causes a reflection and/or shadow to be generated from at least one edge of the raised material, and the reflection and/or shadow is analyzed to detect the presence of the raised material. The method is claimed to distinguish between the raised document surface and surface defects.
[0005] Although the above-mentioned method and apparatus determine the presence or absence of the raised print and distinguish between the raised document surface and surface defects, it is difficult to differentiate between different print thicknesses, i.e., raised print of different heights. The print thickness referred to in this specification is the height of the ink deposited on the printed surface during intaglio printing; the ink on the print tends to be slightly raised above the surface of the paper. Further, the reliability of authenticity checking would be greater if the different thicknesses of the raised print could be distinguished.
[0006] It would be desirable therefore to provide an improved method and system for reliable detection of the raised print.
SUMMARY OF THE INVENTION
[0007] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
[0008] According to one aspect of the invention, there is provided a method for detecting a raised print on a document, the method comprising: illuminating the surface of a security document with structured light; capturing, at a predetermined triangulation angle, the reflected structured light pattern formed on the security document to obtain an image of the same; transforming the obtained image of the reflected structured light pattern into an upper edge component and a lower edge component; and comparing a difference between the upper edge component and the lower edge component with a predefined value to detect the raised print.
[0009] According to another aspect of the invention, the predefined value is a self-learned range of values generated on the fly by determining the difference between the upper edge component and the lower edge component for a plurality of test documents and storing a range of calculated differences as the predefined value.
[0010] In another aspect, the invention provides a system for detecting a raised print on a document, the system comprising an imaging unit for obtaining an image of the document and an analyzing unit for transforming the image and detecting the presence of the raised print. The analyzing unit comprises an image processing engine configured to transform the image into an upper edge component and a lower edge component, and a comparator module to compare a difference between the upper edge component and the lower edge component with a predefined value to detect the raised print.
DETAILED DESCRIPTION OF THE DRAWINGS
[0011] So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to various embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit other equally effective embodiments.
[0012] FIG. 1 is a schematic diagram of a system for detecting a raised print on a document surface according to an embodiment of the invention.
[0013] FIG. 2 illustrates a document without raised print being inspected by the system of FIG. 1.
[0014] FIG. 2A is a snapshot of a laser line profile of the document of FIG. 2 according to an example of the invention.
[0015] FIG. 3 illustrates a document having raised print being inspected by the system of FIG. 1.
[0016] FIG. 3A is a snapshot of a reflected laser line profile formed on the document of FIG. 3 according to an example of the invention.
[0017] FIG. 4 is a snapshot of a digital monochrome image depicting the reflected laser line profile of FIG. 3A.
[0018] FIG. 5 shows the bright region, i.e., the laser line, segmented from the digital gray scale image of FIG. 4.
[0019] FIG. 6 is a schematic diagram of a system for detecting a raised print on a document by imaging both surfaces of the document according to an embodiment of the invention.
[0020] FIG. 7 is a flowchart illustrating the steps to calculate the predetermined value of the transformation parameter.
[0021] FIG. 8 is a flowchart illustrating the steps to detect raised print of required tactility on the printed document according to an embodiment of the invention.
DETAILED DESCRIPTION OF INVENTION
[0022] In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration, specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
[0023] The leading digit(s) of reference numbers appearing in the Figures corresponds to the Figure number in which that component is first introduced, such that the same reference number is used throughout to refer to an identical component which appears in multiple Figures. The same reference number or label may refer to signals and connections, and the actual meaning will be clear from its use in the context of the description.
[0024] The features, structures, or characteristics of the invention described throughout this specification may be combined in any suitable manner in one or more embodiments. For example, reference throughout this specification to "certain embodiments," "some embodiments," or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in certain embodiments," "in demonstrative embodiments," "in some embodiments," "in other embodiments," or similar language throughout this specification do not necessarily all refer to the same group of embodiments, and the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0025] Various embodiments of the invention provide a system and method for detecting a raised print on a document. More specifically, embodiments of the invention provide a method for detecting a raised print on a document surface by comparing data obtained from the document to be inspected with predefined data. Further, the predefined data is generated on the fly, using a plurality of test documents, prior to detecting the raised print on the document.
[0026] FIG. 1 shows an overview of a system 100 for detecting a raised print on a document 140 according to an embodiment of the invention. The system 100 includes an imaging unit having a structured light source 110 capable of illuminating the document 140 and a detector unit (light detector 120) for acquiring an image of the document 140. The system 100 further includes a computing device 130 communicatively coupled to the light detector 120 and configured to process the acquired image. The computing device 130 includes an analyzing unit for processing, e.g., segmenting/filtering the image into an upper and a lower edge.
[0027] The document 140 is held and transported by a sheet belt 150. The document 140 is illuminated by the structured light source 110, preferably placed perpendicular to the surface of the document 140. The structured light is reflected from the surface of the illuminated document 140. The reflected structured light pattern formed on the document is imaged by the light detector 120 or any image acquisition device. The light detector 120 is placed at a triangulation angle α relative to the structured light source 110 in the anticlockwise direction. In one embodiment, the triangulation angle α is between 70° and 90°. The light detector 120 acquires the rays reflected from the illuminated document 140. The captured rays form the image of the document 140. The acquired image is transmitted to the computing device 130. The computing device 130 receives the image through any suitable communication channel, e.g., analog, USB, FireWire 1394a/1394b, Camera Link or Gigabit Ethernet, depending on the communication interface provided on the light detector 120.
[0028] The computing device 130 includes the analyzing unit or an edge filter to filter the image into the upper and the lower edge. A component of the upper and the lower edge is obtained by an image processing engine of the analyzing unit. Essentially, the component of the upper and the lower edge is obtained by calculating a partial derivative of the upper and the lower edge, respectively. A deviation between the upper edge and the lower edge is determined by calculating a transformation parameter between the upper edge component and the lower edge component. The transformation parameter represents a difference or deviation between the upper edge and the lower edge. The transformation parameter is compared with a predefined value by a comparator unit. If the transformation parameter is equal to the predefined value, the raised print is detected. In one embodiment, the transformation parameter is compared with a predefined range, and if the deviation falls within the predefined range the raised print is detected.
[0029] The predefined value is a self-learned reference value generated on the fly prior to determining the raised print on the document 140. For generating the predefined value, a plurality of test or sample documents of acceptable raised print height is collected. Each of the sample documents is illuminated and imaged sequentially such that the transformation parameter for each sample document is calculated. The calculated transformation parameters are stored as the predefined value in a storage medium associated with the computing device 130.
[0030] The self-learned reference value (predefined value) may be generated for a particular thickness or for a particular range of thicknesses of the raised print. For example, the predefined value may be generated corresponding to a 0.1 mm thickness of the raised print. In this case, a plurality of sample documents with 0.1 mm raised print is collected, and the predefined value is generated by the process discussed above. The system 100 is thereby trained to identify/detect the thickness of the raised print of the document 140.
[0031] FIG. 2 shows a document 140 that does not include any raised print. The document 140 is placed on the sheet belt 150 and the reflected structured light pattern formed on the document 140 is imaged by the light detector 120. The light detector 120 transfers the image to the computing device 130. The computing device 130 generates a line profile of the image (filtered image), as best illustrated in FIG. 2A. The process of generating the line profile is described in detail with respect to FIGS. 4-5. The light source 110 used here is a laser line generator. The laser line profile or filtered image includes the upper and the lower edge. The image processing engine obtains the component of the upper and the lower edge. Thereafter, the difference between the components of the upper and the lower edge is calculated. In the absence of raised print, as shown in FIG. 2A, the deviation between the upper and the lower edge component is negligible. In the absence of the raised print, the depression in the upper and the lower edge is only due to absorption of structured light by the printed matter. Therefore, the depression in the upper edge and the lower edge is similar or the same. This deviation between the upper and the lower edge components (the transformation parameter) is then compared to the predefined values or self-learned reference values, i.e., those generated by using the plurality of genuine or test documents, as discussed above. The comparison indicates the absence of the raised print on the document 140. For example, if the deviation (transformation parameter) of the document 140 is different from the predefined self-learned reference value or does not fall within the predefined range, the absence of raised print is indicated.
[0032] FIG. 3 shows a document 140 that includes raised print 160 of the required height. The document 140 is placed on the sheet belt 150 and the image of the reflected structured light pattern formed on the document 140 is acquired by the light detector 120. The light source 110 used here is a laser line generator. The light detector 120 transfers the image to the computing device 130. The computing device 130 generates the laser line profile of the image (filtered image), as best illustrated in FIG. 3A. The laser line profile or filtered image includes the upper and the lower edge. The process of generating the laser line profile is described in detail with respect to FIGS. 4-5. The component of the upper and the lower edge is calculated. Due to the presence of the raised print 160, the depression in the upper edge and the lower edge is different, i.e., the laser line profile deviates from its usual profile (represented by regions 21-24). Essentially, the depression in the upper edge is less than the depression in the lower edge, because the depression in the upper edge is only due to the absorption of the laser light by the printed matter, whereas the depression in the lower edge is the summation of the absorption of the laser light by the printed matter and the thickness of the raised print 160. It has been observed that in security documents printed with raised prints these depressions are minute, and the shift of the laser profile due to the raised print is comparable to the shift due to absorption of light in the dark raised print, e.g., intaglio print. In laser triangulation techniques where the height of the object is significant, the depression of the structured light due to the step height is very prominent and generally cannot be compared to the shift due to absorption of light on the surface. The deviation between the components of the upper and the lower edge is significant in raised security prints where step heights are small. This deviation (transformation parameter) value is compared to the predefined self-learned reference values, i.e., those generated by using the plurality of genuine or test documents, as discussed above. The comparison indicates the presence of the raised print 160 on the document 140. For example, if the deviation of the document 140 lies in the range of the predefined self-learned reference values, the presence of raised print 160 of the required tactility is indicated.
[0033] FIG. 4 shows the generation of a digital gray scale image using a thresholding operation inside the computing device 130. After image acquisition, the acquired image is treated for noise with the required image smoothing filter. The pixel gray value intensity ranges from 0 to 255 in an 8-bit digital monochrome image, 0 being the darkest and 255 the brightest pixel intensity. The pixels having pixel gray value intensity > 240 (bright pixels) are chosen and the laser line is identified, i.e., the bright region of the laser line is located. FIG. 5 illustrates segmenting the generated digital gray scale image into the upper edge 25 and lower edge 26. The row and column location values of the pixels in the identified laser line are extracted at the upper (top) edge and the lower (bottom) edge of the laser line, and the neighborhood pixels are connected to form an edge. Therefore, the upper edge of the segmented laser line and the lower edge of the segmented laser line are derived. The location of the pixels on the edge, namely (x[i], y[i]), is saved for the upper edge as (x_t[i], y_t[i]) and for the lower edge as (x_b[i], y_b[i]).
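By way of illustration only, the sketch below shows one way the segmentation of paragraph [0033] might be carried out on an 8-bit grayscale image. The threshold of 240 follows the description above, while the use of NumPy, the function name and the per-column edge extraction are assumptions of this sketch and not part of the claimed system.

```python
import numpy as np

def segment_laser_line(gray, threshold=240):
    """Locate the bright laser line in an 8-bit grayscale image and return
    the upper (top) and lower (bottom) edge as per-column row coordinates."""
    bright = gray > threshold                 # pixels belonging to the laser line
    width = gray.shape[1]
    y_top = np.full(width, np.nan)            # upper edge row index per column
    y_bottom = np.full(width, np.nan)         # lower edge row index per column
    for x in range(width):
        rows = np.flatnonzero(bright[:, x])   # rows of bright pixels in this column
        if rows.size:                         # column crossed by the laser line
            y_top[x] = rows[0]                # first bright row = upper edge
            y_bottom[x] = rows[-1]            # last bright row = lower edge
    return y_top, y_bottom
```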
[0034] After deriving the upper and the lower edge, the derivative component of the upper and the lower edge is obtained by calculating the partial derivatives of the upper and lower edges. As the neighboring pixels are equally spaced, the derivative may be calculated by the symmetric difference. For discrete 2D equally spaced (x, y) values the symmetric difference is calculated as:
y'[i] = (y[i+1] - y[i-1]) / (x[i+1] - x[i-1]),
wherein i = 1 to image width - 2 (the index of pixels varies from 0 to image width - 1). The derivative components are calculated in this manner for the upper edge and the lower edge respectively from the current laser line image.
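A minimal sketch of the symmetric-difference derivative described in paragraph [0034], assuming equally spaced pixel columns and NumPy arrays; the function name and default unit spacing are illustrative assumptions.

```python
import numpy as np

def symmetric_difference(y, x=None):
    """Derivative component of an edge by the symmetric difference
    y'[i] = (y[i+1] - y[i-1]) / (x[i+1] - x[i-1]), i = 1 .. width - 2."""
    y = np.asarray(y, dtype=float)
    if x is None:                               # equally spaced columns: x[i] = i
        x = np.arange(y.size, dtype=float)
    return (y[2:] - y[:-2]) / (x[2:] - x[:-2])  # one value per interior pixel
```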
[0035] Once the derivative functions (components) of the upper edge and the lower edge are calculated, the transformation parameter between the two derivative functions is calculated. The following linear model is used for the transformation between the two functions:
y'_t(x) = a1 * y'_b(x) + a2,
wherein x = 1 to image width - 2. Initial values are a1 = 1 and a2 = 0. The values of the function y'_b are obtained by linear interpolation after substituting the initial values of a1 and a2. The transformation parameter, namely a1, is determined by the least-squares minimization of the error function
E(a1, a2) = Σ_x [ y'_t(x) - (a1 * y'_b(x) + a2) ]².
Thereby the best fit coefficients a1 and a2, also referred to as the transformation parameters, are calculated.
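As an illustrative sketch of the least-squares estimation of a1 and a2 in paragraph [0035], the problem may be posed directly as an ordinary linear least-squares fit, assuming both derivative components are sampled on the same pixel grid; the iterative interpolation from the initial values a1 = 1, a2 = 0 described above is not modelled here, and the function name is an assumption.

```python
import numpy as np

def fit_transformation(dy_top, dy_bottom):
    """Fit the linear model dy_top(x) = a1 * dy_bottom(x) + a2 in the
    least-squares sense and return the best-fit coefficients (a1, a2)."""
    dy_top = np.asarray(dy_top, dtype=float)
    dy_bottom = np.asarray(dy_bottom, dtype=float)
    # Design matrix [dy_bottom, 1] for the unknowns [a1, a2].
    A = np.column_stack([dy_bottom, np.ones_like(dy_bottom)])
    (a1, a2), *_ = np.linalg.lstsq(A, dy_top, rcond=None)
    return a1, a2
```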
[0036] The best fit transformation parameter a1 contains information about how different the upper edge and the lower edge are with respect to each other. Therefore, the best fit transformation parameter a1 is calculated for the genuine printed matter (sample document) or for a particular thickness of the raised print of the genuine document 140. The coefficient a1 is then used as a predefined value or a predefined reference value to detect the raised print, or the thickness of the raised print, on the document 140.
[0037] The predefined range of values may also be generated for detecting the raised print on the document 140. For determining the predefined range, a plurality of sample documents is collected and each of the sample documents is passed under the structured laser light source 110. As the raised print of the sample document comes under the laser line, the image is acquired. Several scans of the laser line are taken and each acquired image is processed for finding the best fit transformation parameter a1, as discussed above. Every image processed yields a value for the parameter a1. The range of values of a1 is calculated from several passes of each of the sample documents and is stored as the predefined range. The predefined range is compared with the parameter a1' acquired while scanning the document 140 to be inspected. If the value of a1' does not fall within the range of a1, the required print thickness is not detected. Therefore, the presence of raised print of the required print thickness is detected based upon this comparison.
[0038] It has been determined by experiment that a1 < 0.31 indicates the intaglio print for Indian currency notes of Rs. 100, 500 and 1000. Further, it is found that for FIG. 2A the value of a1 is 0.02 (intaglio print). A value of a1 > 0.31 may indicate a non-intaglio print. Typically, a1 > 0.7 indicates a high degree of similarity between the upper edge and the lower edge, and ideally a1 > 0.90 indicates that the upper edge (laser line) and the lower edge are very similar. Further, non-intaglio printed wrinkled surfaces have values of a1 > 0.4; for example, for FIG. 3A the value of a1 is 0.6773 (non-intaglio print). The values of a1 mentioned in this specification are for a particular model of imaging device and its optics, triangulation angle and banknote model, and would vary for other imaging setups. The intaglio print is generally a print of very small thickness, i.e., of small height, e.g., a height of less than 0.1 mm, in the order of a few microns. The system 100 may be trained for detecting an intaglio print of a few microns. A plurality of sample documents of 100 micron print thickness is collected and each of the sample documents is passed under the light source 110. Several scans of the laser line are taken and each acquired image is processed for finding the best fit transformation parameter a1, as discussed above. Every image processed yields a value for the parameter a1. The range of values of a1 is calculated from several passes of each of the sample documents and is stored as the predefined range. Now, the current document that is to be tested is passed under the light source 110. Several scans of the laser line are taken and the acquired images of the current document 140 are processed for finding the best fit parameter; say the best fit parameter for the current document comes out to be a1'. Then a1' is compared with the predefined range of a1. It is possible that, for a particular pass of a printed document, some laser scans acquired by the imaging device are from areas of the document where the best fit parameter does not satisfy the criterion (no intaglio print present). In this case it is necessary to analyze the values of a1', find several minimum values of the best fit parameter, and then check whether these minima fall in the predefined range. If the value of a1' falls within the range of a1, the intaglio print is detected. Further, it can be indicated that the intaglio print is of 100 micron print thickness.
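Purely as an illustration of the procedure in paragraphs [0037]-[0038], the sketch below learns the reference range of a1 from scans of sample documents and then tests a candidate document by checking whether its smallest a1 values fall inside that range. The helper names and the number of minima examined (five) are assumptions of the sketch, not values taken from the specification.

```python
def learn_a1_range(sample_a1_values):
    """Store the minimum and maximum a1 observed over all scans of the
    sample documents as the predefined (self-learned) range."""
    return min(sample_a1_values), max(sample_a1_values)

def has_required_raised_print(candidate_a1_values, a1_range, n_minima=5):
    """Check whether the smallest a1 values among the candidate document's
    scans fall inside the predefined range; some scans may cover areas
    without intaglio print, so only the minima are considered."""
    lo, hi = a1_range
    minima = sorted(candidate_a1_values)[:n_minima]
    return all(lo <= a1 <= hi for a1 in minima)
```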
[0039] As shown in FIG. 6, both sides (surfaces) of the document 140 may also be imaged by the system 100. The top surface of the document 140 is illuminated by a first structured light source 610a and the bottom surface of the document 140 is illuminated by a second structured light source 610b. Both structured light sources 610 (a, b) are placed perpendicular to the surface of the document 140. The structured light sources 610a, 610b are offset in the horizontal direction (along the surface of the document 140) by a small amount. The offset ensures that the structured light sources 610 (a, b) do not illuminate the same region of the document 140 on the top and the bottom surfaces. The light reflected from the top surface is imaged by a first light detector 620a, placed at a triangulation angle α relative to the structured light source 610a. The light reflected from the bottom surface is imaged by a second light detector 620b, placed at a triangulation angle β relative to the structured light source 610b. Both acquired images are then transmitted to the computing device 130 for detecting the raised print according to the method discussed above.
[0040] Usually, the values of α and β are kept high, e.g., equal to 80° or close to 90°, for better analysis of the laser line, as a higher triangulation angle maximizes the visibility of the deviation of the reflected laser line formed on the printed surface. In one embodiment of the invention, the structured light source 110 is a laser line generator, the light detector 120 is an area scan camera, and the computing device 130 is any device capable of processing the image, e.g., a Personal Computer (PC), a server, or an embedded device with sufficient memory, processing power and communication capability. In another embodiment, the light source 110 is any structured light source, e.g., a structured LED line light source. In one possible embodiment, the camera directly receives only the reflected laser line and an FPGA circuit coupled to the camera performs the segmentation. In another possible embodiment, the camera is an area scan CCD or CMOS camera, or a smart area scan camera with an inbuilt processing station having either a CCD or a CMOS sensor.
[0041] FIG. 7 is a flowchart illustrating the method of determining the predefined value of a1. Structured light of a known linear structure is projected onto a printed document of known acceptable print tactility (step 700). The photo detector arranged at a triangulation angle acquires the pattern of the reflected structured light formed on the document surface (step 701). The image of the reflected light pattern formed in the photo detector is transferred to the computing device 130 (step 702). The computing device 130 includes the image processing engine. The image processing engine filters the image into the upper edge and the lower edge (step 703). Further, the partial derivative (component) of the upper edge and of the lower edge is calculated (step 704). The deviation between the upper edge component and the lower edge component is determined by calculating the transformation parameter between the partial derivatives of the upper and the lower edge (step 705). The transformation parameter, namely a1, is stored in a buffer for each sample of printed document of acceptable print height (step 706). Subsequently, all the samples are inspected and each value of a1 associated with the printed documents is stored (step 707). A range of values of a1, namely the minimum and the maximum of a1, is determined (step 708). This range of values of a1 is used to check the tactility of printed documents.
[0042] FIG. 8 is a flowchart illustrating a method for detecting the raised print on the document 140 according to an embodiment of the invention. Structured light from the light source 110, of a known linear structure, is projected onto a printed document 140 (step 800). The photo detector arranged at a triangulation angle acquires the pattern of the reflected structured light formed on the document surface (step 801). The image of the reflected light pattern formed in the photo detector is transferred to the computing device 130 (step 802). The computing device 130 includes the image processing engine. The image processing engine filters the image into the upper edge and the lower edge (step 803). Further, the partial derivative (component) of the upper edge and of the lower edge is calculated (step 804). The deviation between the upper edge component and the lower edge component is determined by calculating the transformation parameter between the partial derivatives of the upper and the lower edge (step 805). If the transformation parameter is within the predefined range (step 806: YES), the required raised print 160 is detected (step 807). If the transformation parameter is not within the predefined range (step 806: NO), the required raised print 160 is not detected (step 808), indicating that the document does not have the required print tactility.
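For readability only, the steps of FIGS. 7 and 8 may be tied together roughly as in the sketch below, which reuses the illustrative helpers sketched earlier (segment_laser_line, symmetric_difference, fit_transformation, learn_a1_range, has_required_raised_print). Image acquisition itself is not modelled, and all names are assumptions rather than part of the specification.

```python
import numpy as np

def a1_from_image(gray):
    """Steps 703-705 / 803-805: segment the laser line, differentiate the
    upper and lower edges, and fit the transformation parameter a1."""
    y_top, y_bottom = segment_laser_line(gray)           # illustrative helper from above
    dy_top = symmetric_difference(y_top)
    dy_bottom = symmetric_difference(y_bottom)
    valid = ~np.isnan(dy_top) & ~np.isnan(dy_bottom)     # drop columns the laser line missed
    a1, _a2 = fit_transformation(dy_top[valid], dy_bottom[valid])
    return a1

def train_reference(sample_images):
    """FIG. 7 (steps 700-708): build the predefined range of a1 from sample
    documents of acceptable print tactility."""
    return learn_a1_range([a1_from_image(img) for img in sample_images])

def inspect_document(document_images, a1_range):
    """FIG. 8 (steps 800-808): report whether the required raised print is
    detected on the document under test."""
    a1_values = [a1_from_image(img) for img in document_images]
    return has_required_raised_print(a1_values, a1_range)
```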
[0043] In the foregoing detailed description of embodiments of the invention, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description of embodiments of the invention, with each claim standing on its own as a separate embodiment.
[0044] It is understood that the above description is intended to be illustrative and not restrictive. It is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the terms "comprising" and "wherein," respectively.
We Claim:
1) A method for detecting a raised print on a security document, the method comprising the steps of:
(i) Illuminating the surface of a security document with structured light;
(ii) Capturing at a predetermined triangulation angle the reflected structured light pattern formed on the security document to obtain an image of the same;
(iii) Transforming the obtained image of the reflected structured light pattern into an upper edge component and a lower edge component; and
(iv) Comparing a difference between the upper edge component and the lower edge component with a predefined value to detect the raised print.
2) The method of claim 1 wherein the structured light comprises a line of light.
3) The method of claim 1 where illuminating the document with structured light comprises projecting a line of light onto the surface of a security document.
4) The method of claim 1 where the predetermined triangulation angle is the angle subtended by the structured light source and the apparatus for capturing the reflected structured light.
5) The method of claim 1 where the value of the predetermined angle is in the range of 80 to 90 degrees.
6) The method of claim 1, wherein transforming the image into the upper edge and the lower edge component comprises calculating a derivative of the upper edge and the lower edge of the image.
7) The method of claim 1, wherein the predefined value is a self-learned range of values generated on the fly by determining the difference between the upper edge component and the lower edge component for a plurality of test documents and storing a range of calculated differences as the predefined value.
8) The method of claim 7, wherein the step of detecting the raised print on the document is subsequent to the step of generating the predefined value.
9) A system for detecting a raised print on a document comprising:
an imaging unit for obtaining an image of the document; and
an analyzing unit for transforming the image and detecting the presence of the raised print.
10) The system of claim 9, wherein the analyzing unit comprises:
an image processing engine configured to transform the image into an upper edge component and a lower edge component;
and a comparator module to compare a difference between the upper edge component and the lower edge component with a predefined value to detect the raised print.
11) The system of claim 9, wherein the imaging unit comprises:
a structured light source for illuminating the document; and a detector unit for capturing the image of the document.
12) The system of claim 11, wherein the detector unit is positioned at a predefined angle relative to the structured light source for obtaining the image of the reflected structured light pattern formed on the document surface.
13) The system of claim 11, wherein the said detector unit uses a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD) sensor.
14) The system of claim 11, wherein the structured light is a line of light generated by a laser line light source or a Light Emitting Diode (LED) line light source.