
Cognitive Static Signature Identification And Verification

Abstract: Static signatures such as handwritten or digital signatures play an important role in financial, legal and commercial transactions, and are in many scenarios the most preferred means of authentication. Conventional approaches have resulted in low precision, involve high computational cost and are prone to errors. Embodiments of the present disclosure provide systems and methods that implement techniques for cognitive static signature identification and verification, wherein a few areas of an input document are identified as candidate signature areas and a noise reduction technique is applied to obtain a denoised image. Regions of interest are dilated to identify contours, and a merged contour is obtained thereof. Features are extracted from the merged contour to determine the presence of a signature. The present disclosure further employs a fully connected neural network that extracts features from the signature, computes genuine and forged scores for each user, and further predicts whether the signature belongs to the user who claims it. [To be published with FIG. 2]


Patent Information

Application #:
Filing Date: 18 November 2019
Publication Number: 22/2022
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Email: kcopatents@khaitanco.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-11-25
Renewal Date:

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai 400021 Maharashtra, India

Inventors

1. SEEGI, Sunil
Tata Consultancy Services Limited Deccan Park, Plot No 1, Survey No. 64/2, Software Units Layout, Serilingampally Mandal, Madhapur, Hyderabad 500034 Telangana, India
2. ARORA, Deepti
Tata Consultancy Services Limited Ground, 1st to 8th Floor, Tower- 2, Okaya Center, Plot No. B- 5, Sector- 62, Noida 201309 Uttar Pradesh, India
3. BANSAL, Himanshu
Tata Consultancy Services Limited Skyview Corporate Park, Sector -74A, Village Narsinghpur, NH-8, Exit -11A, Gurgaon 122001 Haryana, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention:
COGNITIVE STATIC SIGNATURE IDENTIFICATION AND
VERIFICATION
Applicant
Tata Consultancy Services Limited A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD [001] The disclosure herein generally relates to signature identification, extraction and verification techniques, and, more particularly, to cognitive static signature identification and verification.
BACKGROUND [002] Handwritten or digital signatures play an important role in financial, legal and commercial transactions, and are in most scenarios the most preferred means of authentication. Back office processes involve dealing with scanned documents. Users need to identify signatures on the documents and verify their authenticity before proceeding with the concerned process. Conventional approaches involve manual validation of such handwritten or digital signatures and thereby lead to huge business and financial risk. When it comes to handwritten signatures, there is high intra-class variability, because signatures from the same person vary from time to time due to ageing and variations in signing position, which also depend on the writing material used, such as pen, ink, writing surface, and the like. It is difficult to detect forgeries or counterfeits when forged signatures resemble genuine ones to a large extent. Traditionally, attempts have been made to mitigate manual validation of signatures by introducing various validation techniques, for instance, the use of convolutional neural networks or deep learning techniques for increasing accuracy. Though accuracy improvements have been observed by implementing the above traditional techniques, these have resulted in low precision and involve high computational cost. Further, it is also observed that the pre-processing techniques involved in the above traditionally known methods are not very effective, specifically where there is high noise in the signatures, and are further prone to errors.
SUMMARY [003] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in

one aspect, there is provided a processor implemented method for cognitive signature identification and verification. The method comprises obtaining, via one or more hardware processors, an input document comprising a static signature of a user, wherein the signature is in a format comprising a color format or a grayscale format; converting, via the one or more hardware processors, the format of the static signature to a black and white image, by using a binarization technique; applying, via the one or more hardware processors, at least one filter type on the black and white image to obtain a denoised image, wherein the at least one filter type is applied on the black and white image based on type of noise comprised in the black and white image; removing, via the one or more hardware processors, one or more vertical lines from the denoised image to obtain one or more regions of interest; dilating, via the one or more hardware processors, the regions of interest by using a horizontal-shaped kernel to obtain a dilated image; identifying, via the one or more hardware processors, a plurality of contours in the dilated image based on one or more pre-defined thresholds; merging, via the one or more hardware processors, at least a subset of the plurality of contours based on one or more criteria to obtain a merged contour; extracting, via the one or more hardware processors, one or more features from the merged contour; and determining, via the one or more hardware processors, a presence of the static signature in the merged contour using the extracted one or more features.
In an embodiment, the one or more predefined thresholds comprise (i) identifying one or more contours that have width and height greater than a threshold width and height respectively, (ii) identifying one or more contours that have width and height less than a threshold percentage of a document width and height respectively, (iii) identifying one or more contours that have non-zero area, (iv) identifying one or more contours that have a pixel density less than most frequent pixel density, and (v) identifying one or more contours that have height greater than most frequent contour height.
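The five thresholds above can be sketched as a single filtering predicate. All numeric values below are illustrative placeholders and not values fixed by the disclosure; the contour is assumed to be represented by its bounding-box width/height and pixel density:

```python
def keep_contour(w, h, pixel_density, doc_w, doc_h,
                 min_w=20, min_h=10,      # (i) minimum width/height (assumed values)
                 max_frac=0.5,            # (ii) max fraction of document size (assumed)
                 modal_density=0.8,       # (iv) most frequent pixel density (assumed)
                 modal_height=12):        # (v) most frequent contour height (assumed)
    """Return True if a contour passes all five pre-defined thresholds."""
    return (w > min_w and h > min_h                          # (i) large enough
            and w < max_frac * doc_w and h < max_frac * doc_h  # (ii) not page-sized
            and w * h > 0                                      # (iii) non-zero area
            and pixel_density < modal_density                  # (iv) sparser than text
            and h > modal_height)                              # (v) taller than text
```

A signature region is typically larger and sparser than a line of typed text, which is what conditions (iv) and (v) exploit.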
[004] In an embodiment, the one or more criteria comprise at least one of two or more contours that overlap with each other by a determined threshold

vertically and two or more contours that overlap with each other by a determined threshold horizontally.
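The merge criteria above (overlap by a determined threshold vertically or horizontally) can be sketched as a bounding-box test; the one-pixel default threshold and the (x, y, w, h) box representation are illustrative assumptions:

```python
def overlap_1d(a0, a1, b0, b1):
    """Length of overlap between intervals [a0, a1] and [b0, b1]."""
    return max(0, min(a1, b1) - max(a0, b0))

def should_merge(box_a, box_b, thresh=1):
    """True if two (x, y, w, h) boxes overlap by at least `thresh` pixels
    in at least one of the vertical or horizontal directions."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    return (overlap_1d(ax, ax + aw, bx, bx + bw) >= thresh or
            overlap_1d(ay, ay + ah, by, by + bh) >= thresh)

def merge(box_a, box_b):
    """Smallest bounding box enclosing both boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    x0, y0 = min(ax, bx), min(ay, by)
    x1 = max(ax + aw, bx + bw)
    y1 = max(ay + ah, by + bh)
    return (x0, y0, x1 - x0, y1 - y0)
```

Applying `merge` repeatedly over all pairs satisfying `should_merge` yields the single merged contour used in the later feature-extraction step.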
[005] In an embodiment, the type of noise comprises at least one of Gaussian noise, Salt and pepper noise, Poisson noise, and Speckle noise.
[006] In an embodiment, the processor implemented method further comprises: obtaining at least one signature comprising the determined static signature or a new signature; blurring, by using a blurring technique, the at least one signature to obtain a denoised image; converting the denoised image to a black and white image; applying a skeletonisation technique on the black and white image to obtain a reduced width image, wherein each stroke in the reduced width image is of 1-pixel width; extracting one or more features from the reduced width image; processing the extracted one or more features in a fully connected neural network to obtain genuine and forged scores corresponding to a plurality of users; identifying a genuine and forged score corresponding to the user amongst the genuine and forged scores; and verifying the at least one signature as belonging to a user based on the identified genuine and forged score.
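The skeletonisation step above (reducing each stroke to 1-pixel width) is not tied to a particular algorithm in the disclosure; a plain-Python Zhang-Suen thinning pass is one common way to sketch it:

```python
import numpy as np

def zhang_suen_thin(img):
    """Thin a binary image (1 = ink) so strokes become 1 pixel wide.
    Zhang-Suen thinning is an assumed choice; the specification only
    requires that strokes be reduced to 1-pixel width."""
    img = np.pad(img.astype(np.uint8), 1)  # zero border simplifies indexing
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                # the algorithm's two sub-iterations
            to_delete = []
            rows, cols = np.nonzero(img)
            for r, c in zip(rows, cols):
                # neighbours P2..P9, clockwise starting from north
                p = [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                     img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]
                b = int(sum(p))            # number of ink neighbours
                # number of 0 -> 1 transitions around the circle
                a = sum(1 for i in range(8) if p[i] == 0 and p[(i + 1) % 8] == 1)
                if step == 0:
                    cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                else:
                    cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                if 2 <= b <= 6 and a == 1 and cond:
                    to_delete.append((r, c))
            for r, c in to_delete:
                img[r, c] = 0
            if to_delete:
                changed = True
    return img[1:-1, 1:-1]
```

Because deletions in each sub-iteration are collected first and applied afterwards, the pass preserves stroke connectivity while eroding boundary pixels until only the centerline remains.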
[007] In another aspect, there is provided a processor implemented
system for cognitive signature identification and verification. The system
comprises a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: obtain an input document comprising a static signature of a user, wherein the signature is in a format comprising a color format or a grayscale format; convert the format of the static signature to a black and white image, by using a binarization technique comprised in the memory; apply at least one filter type on the black and white image to obtain a denoised image, wherein the at least one filter type is comprised in the memory and is applied on the black and white image based on type of noise comprised in the black and white image; remove one or more vertical lines from the denoised image to obtain one or more regions of interest; dilate the regions of interest by using a horizontal-shaped kernel comprised in the

memory to obtain a dilated image; identify a plurality of contours in the dilated image based on one or more pre-defined thresholds; merge at least a subset of the plurality of contours based on one or more criteria to obtain a merged contour; extract one or more features from the merged contour; and determine a presence of the static signature in the merged contour using the extracted one or more features.
[008] In an embodiment, the one or more predefined thresholds comprise (i) identifying one or more contours that have width and height greater than a threshold width and height respectively, (ii) identifying one or more contours that have width and height less than a threshold percentage of a document width and height respectively, (iii) identifying one or more contours that have non-zero area, (iv) identifying one or more contours that have a pixel density less than most frequent pixel density, and (v) identifying one or more contours that have height greater than most frequent contour height.
[009] In an embodiment, the one or more criteria comprise at least one of two or more contours that overlap with each other by a determined threshold vertically and two or more contours that overlap with each other by a determined threshold horizontally.
[010] In an embodiment, the type of noise comprises at least one of Gaussian noise, Salt and pepper noise, Poisson noise, and Speckle noise.
[011] In an embodiment, the one or more hardware processors are further configured by the instructions to: obtain at least one signature comprising the determined static signature or a new signature; blur, by using a blurring technique comprised in the memory, the at least one signature to obtain a denoised image; convert the denoised image to a black and white image; apply a skeletonisation technique on the black and white image to obtain a reduced width image, wherein each stroke in the reduced width image is of 1-pixel width; extract one or more features from the reduced width image; process the extracted one or more features in a fully connected neural network to obtain genuine and forged scores corresponding to a plurality of users; identify, via the fully connected neural network, a genuine and forged score corresponding to the user amongst the

genuine and forged scores; and verify, via the fully connected neural network, the at least one signature as belonging to a user based on the identified genuine and forged score.
[012] In yet another aspect, there are provided one or more non-transitory machine readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause cognitive signature identification and verification by obtaining, via the one or more hardware processors, an input document comprising a static signature of a user, wherein the signature is in a format comprising a color format or a grayscale format; converting, via the one or more hardware processors, the format of the static signature to a black and white image, by using a binarization technique; applying, via the one or more hardware processors, at least one filter type on the black and white image to obtain a denoised image, wherein the at least one filter type is applied on the black and white image based on type of noise comprised in the black and white image; removing, via the one or more hardware processors, one or more vertical lines from the denoised image to obtain one or more regions of interest; dilating, via the one or more hardware processors, the regions of interest by using a horizontal-shaped kernel to obtain a dilated image; identifying, via the one or more hardware processors, a plurality of contours in the dilated image based on one or more pre-defined thresholds; merging, via the one or more hardware processors, at least a subset of the plurality of contours based on one or more criteria to obtain a merged contour; extracting, via the one or more hardware processors, one or more features from the merged contour; and determining, via the one or more hardware processors, a presence of the static signature in the merged contour using the extracted one or more features.
[013] In an embodiment, the one or more predefined thresholds comprise (i) identifying one or more contours that have width and height greater than a threshold width and height respectively, (ii) identifying one or more contours that have width and height less than a threshold percentage of a document width and height respectively, (iii) identifying one or more contours that have non-zero area, (iv) identifying one or more contours that have a pixel density less than most

frequent pixel density, and (v) identifying one or more contours that have height greater than most frequent contour height.
[014] In an embodiment, the one or more criteria comprise at least one of two or more contours that overlap with each other by a determined threshold vertically and two or more contours that overlap with each other by a determined threshold horizontally.
[015] In an embodiment, the type of noise comprises at least one of Gaussian noise, Salt and pepper noise, Poisson noise, and Speckle noise.
[016] In an embodiment, the one or more instructions which when executed by one or more hardware processors further cause: obtaining at least one signature comprising the determined static signature or a new signature; blurring, by using a blurring technique, the at least one signature to obtain a denoised image; converting the denoised image to a black and white image; applying a skeletonisation technique on the black and white image to obtain a reduced width image, wherein each stroke in the reduced width image is of 1-pixel width; extracting one or more features from the reduced width image; processing the extracted one or more features in a fully connected neural network to obtain genuine and forged scores corresponding to a plurality of users; identifying a genuine and forged score corresponding to the user amongst the genuine and forged scores; and verifying the at least one signature as belonging to a user based on the identified genuine and forged score.
[017] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS [018] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:

[019] FIG. 1 depicts an exemplary block diagram of a system for cognitive static signature identification and verification, in accordance with an embodiment of the present disclosure.
[020] FIG. 2 depicts an exemplary flow chart for cognitive static signature identification and verification method using the system of FIG. 1 in accordance with an embodiment of the present disclosure.
[021] FIG. 3A depicts an input image document comprising a handwritten signature received as an input by the system of FIG. 1, in accordance with an example embodiment of the present disclosure.
[022] FIG. 3B illustrates conversion of the input (image) document comprising a handwritten signature to a black and white image by the system of FIG. 1, in accordance with an example embodiment of the present disclosure.
[023] FIG. 3C illustrates a denoised image upon applying a specific filter based on a noise type identified using the system of FIG. 1, in accordance with an example embodiment of the present disclosure.
[024] FIG. 3D depicts an image document comprising a signature and one or more vertical lines removed by the system of FIG. 1 in accordance with an example embodiment of the present disclosure.
[025] FIG. 3E depicts a dilated image in accordance with an example embodiment of the present disclosure.
[026] FIG. 3F depicts an image document comprising a plurality of contours identified by the system of FIG. 1 in accordance with an example embodiment of the present disclosure.
[027] FIG. 3G depicts an image document comprising the signature and illustrates a merged contour in accordance with an example embodiment of the present disclosure.
[028] FIG. 3H depicts a presence of signature being determined in the input document based on one or more extracted features using the system of FIG. 1 in accordance with an example embodiment of the present disclosure.

[029] FIG. 4 depicts various stages of signature verification process carried out by the method of the FIG. 2 by using the system of FIG. 1 in accordance with an example embodiment of the present disclosure.
[030] FIG. 5 illustrates a plurality of blocks depicting various extracted features in accordance with an embodiment of the present disclosure.
[031] FIGS. 6A-6B depicts signature verification output for a given input signature in accordance with an example embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS [032] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
[033] As mentioned above, static signatures such as handwritten or digital signatures play an important role in financial, legal and commercial transactions, and are in most scenarios the most preferred means of authentication. Conventional approaches involve manual validation of such handwritten or digital signatures and thereby lead to huge business and financial risk. It is further difficult to detect forgeries or counterfeits when forged signatures resemble genuine ones to a large extent. Traditionally, attempts have been made to mitigate manual validation of signatures by introducing various validation techniques, for instance, the use of convolutional neural networks or deep learning techniques for increasing accuracy. Though accuracy improvements have been observed by implementing the above traditional techniques, these have resulted in low precision and involve high computational cost. Further, it is also observed that the pre-processing techniques involved in the above traditionally known methods are not very effective, specifically where there is high noise in the signatures, and are prone to errors.
[034] Embodiments of the present disclosure provide systems and methods for cognitive signature identification and verification. The present disclosure includes a signature identification technique that involves identification of possible areas that may be a signature in a scanned page (or digital document). The system (or techniques executed by the system of the present disclosure) identifies a few areas in the input image that are candidate signature areas. It then filters out those areas that contain only noise, non-signature handwriting, typed text and logos, using machine learning to filter out the non-signature areas. A signature is identified in a scanned colored or grayscale image that contains text of mostly similar font size. The technique can filter out both high white noise and high black noise from the scanned image document. Signature verification, on the other hand, as implemented by the present disclosure, involves predicting whether an input signature belongs to the person who claims the signature to be his or hers. The present disclosure implements a fully connected neural network with input neurons equal to the number of identified features (specified later), some hidden neurons (specified later) and output neurons equal to twice the number of signature owners. For each signature to be predicted, the features required for the input neurons are extracted, and genuine and forged scores for every person are computed. The genuine and forged scores of the person who claims the signature are compared, and accordingly the system of the present disclosure predicts the signature as genuine or forged and thereby determines the user's authenticity. For instance, if the genuine score is higher than the forged score, the system predicts the signature as genuine and belonging to the person who claims it; else, if the forged score is higher than the genuine score, it predicts the signature as forged.
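The genuine-versus-forged decision described above can be sketched as follows. The layer sizes, random weights and feature vector below are placeholders; the disclosure only fixes the input neuron count (number of features) and the output neuron count (twice the number of signature owners):

```python
import numpy as np

rng = np.random.default_rng(0)

def fcn_forward(features, weights, biases):
    """Forward pass of a small fully connected (feedforward) network.
    ReLU on hidden layers; the final layer emits one genuine score and
    one forged score per signature owner."""
    a = features
    for i, (w, b) in enumerate(zip(weights, biases)):
        a = a @ w + b
        if i < len(weights) - 1:
            a = np.maximum(a, 0.0)
    return a

def verify(scores, user_idx):
    """Compare the claimed user's genuine vs. forged score; output layout
    assumed as [genuine_0, forged_0, genuine_1, forged_1, ...]."""
    genuine, forged = scores[2 * user_idx], scores[2 * user_idx + 1]
    return "genuine" if genuine > forged else "forged"

# Illustrative dimensions: 10 features, 16 hidden units, 3 owners -> 6 outputs.
n_feat, n_hidden, n_owners = 10, 16, 3
weights = [rng.normal(size=(n_feat, n_hidden)),
           rng.normal(size=(n_hidden, 2 * n_owners))]
biases = [np.zeros(n_hidden), np.zeros(2 * n_owners)]
scores = fcn_forward(rng.normal(size=n_feat), weights, biases)
```

In practice the weights would come from training on each owner's genuine and forged samples; here they are random purely to exercise the shapes and the decision rule.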
[035] Referring now to the drawings, and more particularly to FIGS. 1 through 6B, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.

[036] FIG. 1 depicts an exemplary block diagram of a system 100 for cognitive static signature identification and verification, in accordance with an embodiment of the present disclosure. The system 100 may also be referred to as 'static signature identification and verification system (SSIVS)' or 'cognitive system', and these terms may be used interchangeably hereinafter. In an embodiment, the system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[037] The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[038] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an

embodiment, a database 108 is comprised in the memory 102, wherein the database 108 comprises information pertaining to signature(s) of users, for example, static signatures such as handwritten signatures, digital signatures, and the like. In an embodiment, the memory 102 may store one or more techniques (e.g., binarization technique(s), denoiser(s), vertical line remover (or image processing technique), dilating technique(s), contours identification technique(s), features extraction technique(s), blurring technique(s) such as median blur technique, skeletonisation technique such as stroke width reduction technique/thinning technique, and the like). The memory 102 further comprises information pertaining to noise type(s) comprised in the signature(s), or input document containing a signature, wherein the noise type can comprise, but is not limited to, Gaussian noise, Salt and pepper noise, Poisson noise, Speckle noise, and the like.
[039] The memory 102 further comprises one or more fully connected neural networks, wherein the one or more fully connected neural networks comprise a feedforward neural network. The memory 102 further comprises one or more scores computed by a fully connected neural network comprised in the memory 102, wherein the computed scores pertain to forged signature scores and genuine signature scores, which can be used to validate/verify a user claiming his/her signature. The memory 102 further comprises information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
[040] FIG. 2 depicts an exemplary flow chart for cognitive static signature identification and verification using the system 100 of FIG. 1 in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be

explained with reference to components of the system 100 of FIG. 1, the flow
diagram as depicted in FIG. 2 and the FIGS. 3A to 6B. At step 202 of the present
disclosure, the one or more hardware processors 104 obtain an input document
comprising a static signature. The static signature can be one or more of
handwritten signature(s) or a digital signature(s). For instance, FIG. 3A
illustrates an image document that depicts a static signature such as a handwritten signature, wherein the handwritten signature is in a format comprising a color format or a grayscale format. More specifically, FIG. 3A, with reference to FIGS. 1 through 2, depicts an input image document comprising a handwritten signature received as an input by the system 100 of FIG. 1, in accordance with an example embodiment of the present disclosure. In the present disclosure, the image document as depicted in FIG. 3A comprising a signature was obtained from a publicly available dataset (e.g., the Tobacco 800 dataset).
[041] At step 204 of the present disclosure, the one or more hardware processors 104 convert the format of the static signature to a black and white image, by using a binarization technique. For instance, FIG. 3B, with reference to FIGS. 1 through 3A, illustrates conversion of the input (image) document comprising a handwritten signature to a black and white image by the system 100 of FIG. 1, in accordance with an example embodiment of the present disclosure. If the input document is a color image document, it may be converted to a grayscale image document prior to conversion to the black and white image document. In other words, the color image document is first converted to a grayscale image document, and the grayscale image document is then converted to the black and white image document.
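The color-to-grayscale-to-black-and-white conversion of step 204 can be sketched as below. The disclosure does not name a specific binarization technique, so a simple global mean threshold is assumed here (Otsu's method would be a typical production choice):

```python
import numpy as np

def to_black_and_white(img, threshold=None):
    """Convert a colour (H, W, 3) or grayscale (H, W) image to binary.
    A global mean threshold is an assumed stand-in for the
    unspecified binarization technique."""
    img = np.asarray(img, dtype=np.float64)
    if img.ndim == 3:
        # colour -> grayscale first, using standard luminance weights
        img = img @ np.array([0.299, 0.587, 0.114])
    if threshold is None:
        threshold = img.mean()
    return (img > threshold).astype(np.uint8)  # 1 = white, 0 = black
```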
[042] At step 206 of the present disclosure, the one or more hardware processors 104 apply at least one filter type on the black and white image (or the black and white image document depicted in FIG. 3B) to obtain a denoised image. For instance, the at least one filter type is applied on the black and white image based on the type of noise comprised in the black and white image. For example, a document when scanned may contain a signature and can have any type of noise. Various types of noise comprise, but are not limited to, Gaussian

noise, Salt (random white pixel) and pepper (random black pixel) noise, Poisson noise, Speckle noise, and the like. The filter type is identified by the system 100, and accordingly an appropriate filter amongst the filters comprised in the memory 102 and executed by the system 100 of FIG. 1 is invoked and applied on the identified noise. Upon applying an appropriate filter on the black and white image, a denoised image is outputted by the system 100. For instance, assuming that the black and white image as outputted in step 204 comprises Gaussian noise, then a Gaussian filter and/or a Mean filter may be identified by the system 100, and at least one of the Gaussian filter and the Mean filter is applied on the black and white image to output a denoised image. Similarly, if the noise type is identified as Salt (random white pixel) and pepper (random black pixel) noise, the Median filter type is identified by the system, and the Median filter is applied on the black and white image to output a denoised image that is free of the Salt and pepper noise. Similarly, if the noise type is identified as Poisson noise or Speckle noise, the Wiener filter type is identified by the system, and the Wiener filter is applied on the black and white image to output a denoised image that is free of the Poisson noise or Speckle noise.
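The noise-type-to-filter dispatch above can be sketched as follows. The Median filter branch follows the mapping in the description; the Gaussian/Poisson/Speckle branches are stubbed with a simple mean filter as an assumed stand-in for the true Gaussian and Wiener filters:

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter -- the filter the disclosure pairs with
    salt-and-pepper noise. Pure-NumPy sliding window for illustration."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    windows = np.stack([padded[r:r + h, c:c + w]
                        for r in range(k) for c in range(k)])
    return np.median(windows, axis=0)

def denoise(img, noise_type):
    """Dispatch to a filter based on the identified noise type."""
    if noise_type == "salt_and_pepper":
        return median_filter(img)
    if noise_type in ("gaussian", "poisson", "speckle"):
        # stand-in: 3x3 mean filter instead of true Gaussian/Wiener filters
        padded = np.pad(img.astype(float), 1, mode="edge")
        h, w = img.shape
        return sum(padded[r:r + h, c:c + w]
                   for r in range(3) for c in range(3)) / 9.0
    raise ValueError(f"unknown noise type: {noise_type}")
```

A lone salt pixel is replaced by the median of its neighbourhood, which removes it without blurring nearby strokes.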
[043] FIG. 3C, with reference to FIGS. 1 through 3B, illustrates a denoised image upon applying a specific filter based on a noise type identified using the system 100 of FIG. 1, in accordance with an example embodiment of the present disclosure.
[044] It is to be understood by a person having ordinary skill in the art that the above-mentioned noise type(s) and identified filters shall not be construed as limiting the scope of the present disclosure. In other words, a Bilateral filter, which works like a Gaussian filter and preserves edges, may also be used for Gaussian noise filtration in the black and white image. This filter is applicable more to images rather than scanned documents. Further, the non-local Means Denoising algorithm/filter looks for patches similar to a subject patch, takes a mean, and then removes variations. This filter may not be applicable for the example under consideration in the present disclosure, because typed text will most likely not have similar patches (different text being typed) while noise can have similar patches. While a scanned document (the subject of this discussion in the present disclosure) can contain any of the above noises and any of the above filters can be applicable, a scanned document has its own processing needs. Very dark regions of a scan possibly hint at noise; however, small font and densely typed text can resemble noise. In addition, scattering of dots and lines in a scanned document can be construed as noise, while it can be a dot at the top of "i" or "j" or part of a signature. Thus, a specific filter to remove noise from a scanned document is required. In the present disclosure, a combination of a denoiser and one of the filters (as known in the conventional art) has been used and implemented to create the denoiser required for implementing the systems and methods of the present disclosure.
[045] The description below illustrates removal of thick black noise, if present in the black and white image, through the following steps:
1. Divide the scanned document into a grid of x*y pixels (e.g., 16*16 pixels, wherein the grid size can be adjusted as per the document). Let each square be referred to as a cell.
2. For each cell, it is determined whether the pixel density of the cell is greater than the higher threshold pixel density and whether at least one of the neighborhood cells has a pixel density greater than the higher threshold pixel density. If this condition is satisfied, then:
a. The system 100 whitens out the cell.
b. The system traverses from the left side of the cell towards the left of the page, clearing out all the filled pixels until it encounters a threshold number of unfilled pixels, at which point it stops traversing further left.
c. Like the traversal towards the left side, the system traverses from the top of the cell towards the top of the page, and stops when it encounters a threshold number of unfilled pixels.
d. Likewise, the system traverses from the right side of the cell towards the right of the page, and from the bottom side of the cell towards the bottom of the page, stopping in each direction when it encounters a threshold number of unfilled pixels.
[046] The description below illustrates removal of white noise, if present in the black and white image, through the following steps:
1. Divide the scanned document into a grid of x*y pixels (e.g., 16*16 pixels, wherein the grid size can be adjusted as per the document). Let each square be referred to as a cell.
2. For each cell, it is determined whether the pixel density of the cell is less than the lower threshold pixel density and whether at least one of the neighborhood cells has a pixel density less than the lower threshold pixel density. If this condition is satisfied, then:
a. The system whitens out that cell.
[047] Referring to the steps of FIG. 2, at step 208 of the present disclosure, the one or more hardware processors 104 remove one or more vertical lines from the denoised image (e.g., the denoised image output depicted in FIG. 3C) to obtain one or more regions of interest. FIG. 3D, with reference to FIGS. 1 through 3C, depicts an image document comprising a signature and one or more vertical lines removed by the system 100 of FIG. 1, in accordance with an example embodiment of the present disclosure. More specifically, removal of vertical lines involves removal of lines that are potentially noise (e.g., scan marks, and the like). At step 210 of the present disclosure, the one or more hardware processors 104 dilate the one or more regions of interest by using a horizontal-shaped kernel to obtain a dilated image. FIG. 3E, with reference to FIGS. 1 through 3D, depicts a dilated image in accordance with an example embodiment of the present disclosure. In the present disclosure, dilation was performed on the one or more regions of interest using the horizontal-shaped kernel (3*1 shape) for ‘n’ iterations (e.g., 5 iterations for the input document of step 202). It is to be understood by a person having ordinary skill in the art that the iteration(s) may vary for different types of document(s).
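By way of a non-limiting illustrative sketch, the grid-based removal of thick black noise and white noise described in paragraphs [045] and [046] above may be expressed as follows. The cell size and density thresholds are illustrative assumptions, and the directional left/top/right/bottom clearing beyond the cell is omitted in this sketch for brevity:

```python
import numpy as np

def cell_densities(img, cell):
    # mean fill per grid cell; img is a binary array (1 = black/filled pixel)
    gy, gx = img.shape[0] // cell, img.shape[1] // cell
    return img[:gy * cell, :gx * cell].reshape(gy, cell, gx, cell).mean(axis=(1, 3))

def _has_matching_neighbour(d, y, x, pred):
    # True if any of the up-to-8 neighbouring cells satisfies the predicate
    gy, gx = d.shape
    return any(pred(d[ny, nx])
               for ny in range(max(0, y - 1), min(gy, y + 2))
               for nx in range(max(0, x - 1), min(gx, x + 2))
               if (ny, nx) != (y, x))

def remove_grid_noise(img, cell=16, hi=0.9, lo=0.02):
    # whiten cells that are almost fully black (thick black noise) or that
    # hold only a few stray pixels (white noise), when a neighbouring cell
    # satisfies the same density test
    out = img.copy()
    d = cell_densities(img, cell)
    for y in range(d.shape[0]):
        for x in range(d.shape[1]):
            dense = d[y, x] > hi and _has_matching_neighbour(d, y, x, lambda v: v > hi)
            sparse = 0 < d[y, x] < lo and _has_matching_neighbour(d, y, x, lambda v: v < lo)
            if dense or sparse:
                out[y * cell:(y + 1) * cell, x * cell:(x + 1) * cell] = 0
    return out
```

An isolated dense cell is preserved, since the neighbourhood condition prevents a legitimately dark region (e.g., a logo) from being whitened out on its own.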

[048] At step 212 of the present disclosure, the one or more hardware processors 104 identify a plurality of contours in the dilated image(s) based on one or more pre-defined thresholds. FIG. 3F, with reference to FIGS. 1 through 3E, depicts an image document comprising a plurality of contours identified by the system 100 of FIG. 1, in accordance with an example embodiment of the present disclosure. In the present disclosure, the one or more pre-defined thresholds comprise, but are not limited to, (i) identifying one or more contours that have more than a threshold width and height (wherein in the present disclosure 8 pixels was used as the threshold for both width and height), (ii) identifying one or more contours that have width and height less than a threshold percentage of page width and height (wherein in the present disclosure 75% was used as the threshold percentage of page width and height), (iii) identifying one or more contours that have non-zero area, (iv) identifying one or more contours that have pixel density less than the most frequent pixel density, and (v) identifying one or more contours that have height greater than the most frequent contour height. In the present disclosure, contours that did not satisfy the above pre-defined thresholds (i) and (ii) were excluded. Pre-defined threshold (iv) reflects that text and logos have higher pixel density than a signature. Pre-defined threshold (v) reflects that a signature has greater height than the average height of text in a page. In other words, pre-defined thresholds (iii), (iv) and (v) were used for identifying contours satisfying these conditions for further processing and analysis by the present disclosure.
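By way of a non-limiting illustrative sketch, the pre-defined thresholds (i)-(v) of step 212 may be applied as follows. The dictionary schema for a contour and the rounding used to estimate the "most frequent" pixel density are assumptions of this sketch, not part of the disclosure:

```python
from collections import Counter

def filter_contours(contours, page_w, page_h, min_side=8, max_frac=0.75):
    # contours: list of dicts with keys x, y, w, h, density (illustrative schema)
    # most frequent pixel density (rounded) and most frequent contour height
    freq_density = Counter(round(c["density"], 1) for c in contours).most_common(1)[0][0]
    freq_height = Counter(c["h"] for c in contours).most_common(1)[0][0]
    kept = []
    for c in contours:
        if not (c["w"] > min_side and c["h"] > min_side):             # threshold (i)
            continue
        if c["w"] > max_frac * page_w or c["h"] > max_frac * page_h:  # threshold (ii)
            continue
        if c["w"] * c["h"] == 0:                                      # threshold (iii)
            continue
        # thresholds (iv) and (v): signature-like contours have lower pixel
        # density than text/logos and are taller than the most frequent height
        if c["density"] < freq_density and c["h"] > freq_height:
            kept.append(c)
    return kept
```

Running this over a page of uniform text contours plus one taller, sparser contour keeps only the signature-like candidate.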
[049] At step 214 of the present disclosure, the one or more hardware processors 104 merge at least a subset of the plurality of identified contours based on one or more criteria to obtain a merged contour. For instance, the one or more criteria comprise, but are not limited to, contours that overlap each other by a determined threshold vertically or horizontally. In other words, contours that overlap each other by a defined threshold vertically or horizontally were merged by the system 100 in the present disclosure to obtain a merged contour. FIG. 3G, with reference to FIGS. 1 through 3F, depicts an image document comprising the signature and illustrates a merged contour in accordance with an example embodiment of the present disclosure.
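One way to realize the merge criterion of step 214 is sketched below, treating each contour as a bounding box and repeatedly unioning any two boxes whose rectangles overlap. The `(x, y, w, h)` box representation and the `gap` tolerance are assumptions of this sketch:

```python
def merge_boxes(boxes, gap=0):
    # boxes: list of (x, y, w, h); repeatedly union any two boxes whose
    # rectangles overlap horizontally and vertically (within `gap` pixels)
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        remaining = []
        while boxes:
            a = boxes.pop()
            for i, b in enumerate(boxes):
                if (a[0] <= b[0] + b[2] + gap and b[0] <= a[0] + a[2] + gap and
                        a[1] <= b[1] + b[3] + gap and b[1] <= a[1] + a[3] + gap):
                    x = min(a[0], b[0]); y = min(a[1], b[1])
                    w = max(a[0] + a[2], b[0] + b[2]) - x
                    h = max(a[1] + a[3], b[1] + b[3]) - y
                    boxes[i] = (x, y, w, h)   # replace b with the union of a and b
                    merged = True
                    break
            else:
                remaining.append(a)
        boxes = remaining
    return boxes
```

Two overlapping boxes collapse into their bounding union, while a distant box is left untouched.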
[050] At step 216 of the present disclosure, the one or more hardware processors 104 extract one or more features from the merged contour. In the present disclosure, for the given input document depicted in step 202 and FIG. 3A, a total of 33 features were extracted. The features included, but are not limited to, pixel density (total one feature point), horizontal and vertical position of the contour in the document (or page) - total 2 feature points, width and height of the contour (total 2 feature points), and 28 features used in signature verification, to name a few. At step 218 of the present disclosure, the one or more hardware processors 104 determine a presence of the signature in the merged contour using the extracted one or more features. FIG. 3H, with reference to FIGS. 1 through 3G, depicts a presence of signature being determined in the input document based on the above one or more extracted features using the system 100 of FIG. 1, in accordance with an example embodiment of the present disclosure. In the present disclosure, the systems and methods and their embodiments have implemented a Random Forest Classifier using the Entropy criterion on the feature points in the above step to predict/determine whether a contour contains a signature or not. It is to be understood by a person having ordinary skill in the art or a person skilled in the art that the Random Forest Classifier using the Entropy criterion shall not be construed as limiting the scope of the present disclosure and any other classifier can be utilized for determining the presence of a signature in a particular contour based on the extracted features.
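By way of a non-limiting illustrative sketch, the first five of the 33 feature points of step 216 may be computed as follows; the page/box representation is an assumption of this sketch, and the remaining 28 signature-verification features (described later with reference to FIG. 5) are indicated only by a comment:

```python
import numpy as np

def contour_features(page, box):
    # page: binary numpy array (1 = ink); box: (x, y, w, h) of the merged contour
    x, y, w, h = box
    crop = page[y:y + h, x:x + w]
    ph, pw = page.shape
    return [
        float(crop.mean()),   # pixel density (1 feature point)
        x / pw, y / ph,       # horizontal/vertical position in the page (2 points)
        w / pw, h / ph,       # width and height relative to the page (2 points)
        # ...the remaining 28 signature-verification features would follow
    ]
```

The resulting feature vector would then be fed to a classifier such as the Random Forest Classifier with the Entropy criterion mentioned above.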
[051] Further, once the signature is identified/determined, the system 100 continues with further processing to verify a user who claims that it is his/her signature. In order to verify whether the signature belongs to that user or not, the system 100 and its method process and/or execute the following steps. Firstly, the signature identified above in step 218 is fed as an input to the system 100. Alternatively, the system 100 is capable of directly receiving a signature that the user claims to be his/hers. In such scenarios, the system 100 may not be required to perform steps 202 through 218 and the below steps can be directly performed on the received signature. FIG. 4 depicts various stages of the signature verification process carried out by the method of FIG. 2 using the system 100 of FIG. 1, in accordance with an example embodiment of the present disclosure. Image (a) of FIG. 4 depicts an illustrative signature as either identified by the system 100 or received as an input for verification, in accordance with an embodiment of the present disclosure. More specifically, the illustrative signature is depicted under the header ‘original’ in (a) of FIG. 4. Firstly, the identified signature (or received input signature) is blurred, by using a blurring technique, to obtain a denoised image. For instance, in the present disclosure, the blurring technique comprises a median blur technique (comprised in the memory 102) which is executed by the system 100 and applied on the received input signature or the identified signature to obtain a denoised image. Image (b) of FIG. 4 depicts the denoised image (gravity and geo center image) upon applying the blurring technique on the identified signature (or the received input signature). Further, the denoised image is converted to a black and white image. For instance, image (d) depicts conversion of the denoised image to a black and white image illustrated under the ‘binary’ header, in accordance with an example embodiment. Upon converting to the black and white image, the black and white image is further processed wherein a skeletonisation technique (also referred to as a skeletonization technique) is applied on the black and white image to obtain a reduced width image. The skeletonization technique herein may be referred to as a stroke width reduction technique or a thinning technique. In the reduced width image, which is an output of the skeletonization technique, each pixel is of a 1-pixel stroke width. The reduced width image output is depicted under the header ‘thinned’ in (e) of FIG. 4. In the present disclosure, the system 100 and method implemented the Zhang-Suen thinning technique.
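The Zhang-Suen thinning technique named above may be sketched as follows; this is a straightforward rendering of the published two-subiteration algorithm on a binary array, with no optimizations, and the function name is illustrative:

```python
import numpy as np

def zhang_suen_thin(img):
    # img: binary numpy array (1 = foreground). Returns a 1-pixel-wide skeleton.
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # neighbours P2..P9, clockwise starting from north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                       # B(P1): filled neighbours
                    if not 2 <= b <= 6:
                        continue
                    # A(P1): number of 0 -> 1 transitions around the pixel
                    a = sum((p[i] == 0 and p[(i + 1) % 8] == 1) for i in range(8))
                    if a != 1:
                        continue
                    # p[0], p[2], p[4], p[6] are P2 (N), P4 (E), P6 (S), P8 (W)
                    if step == 0 and (p[0]*p[2]*p[4] or p[2]*p[4]*p[6]):
                        continue
                    if step == 1 and (p[0]*p[2]*p[6] or p[0]*p[4]*p[6]):
                        continue
                    to_delete.append((y, x))
            for y, x in to_delete:   # delete marked pixels after the full pass
                img[y, x] = 0
                changed = True
    return img
```

Applied to a thick stroke, the routine erodes boundary pixels from alternating sides until only a connected 1-pixel-wide skeleton remains.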
[052] Upon outputting the reduced width image, the system 100 extracts one or more features from the reduced width image. FIG. 5, with reference to FIGS. 1 through 4, illustrates a plurality of blocks depicting various extracted features in accordance with an embodiment of the present disclosure. In the present disclosure, the one or more extracted features comprise, but are not limited to, presence of pixels occupied in each of the 16 blocks depicted in FIG. 5, percentage filling of the signature in each block, gravity and geo center as a feature as depicted in (b) of FIG. 4, angle of the line joining the center of gravity and the geometric center, and the like. More specifically, the following features were extracted by the system 100 and method of the present disclosure:
1. Number of pixels with ‘n’ number of neighboring pixels, where ‘n’ varies from 0 to 8. Total nine feature points.
2. Angle of the line joining center of gravity (mean position of pixels) and geometric center. Total one feature point.
3. Proportion of pixels spread over the skeletonized-rotated-sign-image split horizontally and vertically into ‘m’ pieces (e.g., 16 pieces in the present use case) with the center of gravity as the center. For example, for an image:
i. the thinned image was rotated so that the pixel density is equal horizontally (e.g., refer to the rotation performed and depicted in (f) of FIG. 4 and the output ‘rotated thinned’ depicted in (g) of FIG. 4);
ii. the center of gravity of the image was further identified;
iii. the above image was split into four pieces cut by vertical axes going 1) from the left most pixel to the mid-point of the left most pixel and the center of gravity, 2) from the mid-point of the left most pixel and the center of gravity to the center of gravity, 3) from the center of gravity to the mid-point of the center of gravity and the right most pixel, and 4) from the mid-point of the center of gravity and the right most pixel to the right most pixel;
iv. all of the above four images were cut horizontally into four pieces each across the center of gravity, in the same manner as the image was split vertically. Total 16 feature points.
4. Aspect ratio of the signature image. Total one feature point.
5. Extent: number of filled pixels divided by the total number of pixels, including filled and non-filled. Total one feature point.
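Several of the features enumerated above may be sketched on a thinned binary image as follows. This is a minimal illustration (the neighbour-count histogram, the gravity-to-geometric-center angle, the aspect ratio, and the extent); the 16-piece proportional split is omitted, and the function name is illustrative:

```python
import numpy as np

def skeleton_features(skel):
    # skel: thinned binary numpy array (1 = stroke pixel)
    ys, xs = np.nonzero(skel)
    # neighbour-count histogram: how many pixels have n filled neighbours, n = 0..8
    padded = np.pad(skel, 1)
    nbrs = sum(padded[1 + dy:1 + dy + skel.shape[0], 1 + dx:1 + dx + skel.shape[1]]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    hist = [int(((nbrs == n) & (skel == 1)).sum()) for n in range(9)]
    # angle of the line joining the center of gravity and the geometric center
    cog = (ys.mean(), xs.mean())                              # mean pixel position
    geo = ((ys.min() + ys.max()) / 2, (xs.min() + xs.max()) / 2)
    angle = np.arctan2(cog[0] - geo[0], cog[1] - geo[1])
    # aspect ratio and extent of the bounding box
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect = w / h
    extent = len(ys) / (w * h)
    return hist, float(angle), float(aspect), float(extent)
```

For a short horizontal stroke, the two endpoints each have one neighbour and the interior pixel has two, the centers of gravity and geometry coincide, and the bounding box is fully filled.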

[053] It is to be understood by a person having ordinary skill in the art or a person skilled in the art that the above extracted features shall not be construed as limiting the scope of the present disclosure.
[054] Upon extracting the features as depicted/described above, the extracted features were processed in a fully connected neural network (a feedforward neural network) to obtain genuine and forged scores corresponding to a plurality of users. The fully connected neural network is comprised in the memory 102 and executed by the system 100 to obtain the genuine and forged scores corresponding to the plurality of users, and a genuine and forged score corresponding to the user amongst these scores was identified for verifying the at least one handwritten signature as belonging to the user based on the identified genuine and forged score.
[055] The above description is elaborated with further details for better understanding of the embodiments described herein, by way of an example:
[056] In the present disclosure, the system 100 comprised a neural network (e.g., a feedforward neural network – not shown in FIGS.) that has 28 input neurons, 100 hidden neurons, and twice the number of persons whose signatures need to be verified as the number of output neurons. The number of output neurons is equal to twice the number of persons because a genuine and a forged score is required for each person whose signature needs to be verified. Thus, each of the output neurons corresponds to either a genuine score or a forged score for each person whose signature needs to be verified. It is to be understood by a person having ordinary skill in the art or a person skilled in the art that the number of hidden neurons (e.g., 100 hidden neurons as implemented by the present disclosure) may be increased or decreased with the increase or decrease in the number of persons whose signatures need to be verified and shall not be construed as limiting the scope of the present disclosure. The neural network in the above step gave a genuine and a forged score for every person whose signature may be verified. The genuine neuron score and the forged neuron score for the person whose signature is to be verified were compared to determine the authenticity of the user claiming it. In the present disclosure, the system 100 implemented verification of the user claiming the signature as follows: if the genuine score identified in the above step is greater than the forged score, then the signature is predicted Genuine; in all other cases, the signature is predicted Forged. FIGS. 6A-6B, with reference to FIGS. 1 through 5, depict the signature verification output for a given input signature in accordance with an example embodiment of the present disclosure. More specifically, FIGS. 6A-6B depict genuine and forged scores for a user’s signature, wherein the score comparison is indicative of whether the signature of the user is genuine and belongs to that user or not. For instance, as depicted in FIG. 6A, the owner identifier is 001 (say user A), for which the genuine score computed by the neural network is 14.87628, which is greater than the forged score of 6.01998. This is indicative that the user signature is genuine, and the signature belongs to user A. On the other hand, in FIG. 6B, the owner identifier is 003 (say user B), for which the genuine score computed by the neural network is 4.06038, which is less than the forged score of 8.49057. This is indicative that the user signature is forged, and the signature may not belong to user B.
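The architecture and decision rule described above may be sketched as follows. This is a minimal illustration of the 28-input, 100-hidden, 2-outputs-per-person layout: the weights below are random and untrained, the training procedure is not specified by this sketch, and the ReLU activation and all names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

class SignatureVerifier:
    # feedforward net: 28 inputs -> 100 hidden -> 2 * n_persons outputs;
    # output 2k is the "genuine" score for person k, output 2k + 1 the "forged" score
    def __init__(self, n_persons, n_features=28, n_hidden=100):
        self.w1 = rng.normal(0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0, 0.1, (n_hidden, 2 * n_persons))
        self.b2 = np.zeros(2 * n_persons)

    def scores(self, features):
        hidden = np.maximum(0, features @ self.w1 + self.b1)  # ReLU (assumed)
        return hidden @ self.w2 + self.b2

    def verify(self, features, person):
        # decision rule from the disclosure: Genuine iff genuine score > forged score
        out = self.scores(features)
        genuine, forged = out[2 * person], out[2 * person + 1]
        return "Genuine" if genuine > forged else "Forged"
```

One such network serves all enrolled persons at once, which is the single-model property discussed in the technical advancements below paragraph [058].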
[057] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[058] Embodiments of the present disclosure implement systems and methods for cognitive signature identification and verification. The present disclosure includes the following technical advancements: 1. Signature Identification:
a. For contour extraction, the present disclosure and its systems and methods implemented one or more dilating technique(s) using a cross kernel (mostly horizontal) to separate text lines and yet ensure that different paragraphs of text are not merged.

b. For signature identification, properties of the signature (pixel density and height) were used in a mostly text-oriented scanned image. The other features helped gain better precision and recall.
c. For high-noise documents, the present disclosure used high pixel density and low pixel density to remove potential noise.
d. Since the height of contours has been used as one of the key properties, vertical lines were removed more aggressively than horizontal lines.
e. A signature contour demonstrates the same properties as a signature, and thus signature verification features were used along with unique features related to signature identification in a Machine Learning algorithm (Random Forest) by the present disclosure.
[059] The above points involve technical advancements over conventional approaches, wherein different signatures are learnt in order to identify a possible signature. However, there can be a huge number of signatures (as many signatures as the number of humans that have ever signed any document) and thus identifying a signature using that method is not as precise. The size of a signature varies from one document to another, which makes it even more complicated. Therefore, the method and system of the present disclosure implement approach(es) that mainly use uniformity and density of text to segregate possible contours and then run a Random Forest algorithm/classifier to further pin-point the signature contour. 2. Signature Verification
a. For signature verification, approximation features such as pixel density around the center of gravity were used by the present disclosure, while other conventionally known algorithms for signature verification are mostly based on a Convolutional Neural Network (CNN). The advantage of approximation features is that the shape of a person’s signature may change over time (a CNN fits filters to the image, and the filters must closely resemble it), however such approximation features still remain almost the same.

b. The systems and methods of the present disclosure implement technique(s) (or algorithms) that eliminate the need for multiple passes or processings of the image, thus resulting in faster processing and less computation time. On the contrary, CNN-based approaches need multiple passes or processings of the image and require heavy computation.
c. The systems and methods of the present disclosure implement one machine learning technique (or algorithm) for the multiple persons whose signatures need to be verified, thus eliminating the need of devising multiple models. On the contrary, conventionally known systems and methods have devised multiple models for multiple persons, where the maintenance of the multiple devised models becomes an overhead and may lead to overfitting for each person in the likely scenario that only a few genuine and forged signatures for each person are made available.
d. The systems and methods of the present disclosure have been compared with conventionally known technique(s) by comparing the genuine and forged scores, where both may be very low compared to another person’s genuine or forged score, yet the difference between the two scores allowed the present disclosure and its systems and methods to decide whether a signature is genuine or forged.
[060] Therefore, the systems and methods of the present disclosure provide technical advancement and advantages over prior art technique(s) or conventionally known techniques in the image processing domain, as the conventionally known techniques use flavors of Convolutional Neural Networks or machine vision algorithms. Additionally, unlike the conventional approaches wherein implementation of a CNN involves creating one output neuron (genuine or forged) or a number of neurons equal to the number of persons, the systems and methods of the present disclosure implement a feedforward neural network that creates twice that number of neurons, wherein the genuine and forged scores are compared, and the ability of self-learning both inter- and intra-person variations establishes the method of the present disclosure as an ingenious approach.
[061] The present disclosure and its systems and methods can be implemented in various applications, for example, but not limited to, forged signature identification at financial institutions, signature verification on loan payments and account opening forms, verification of individual authenticity within organizations, and the like.
[062] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[063] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[064] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[065] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[066] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

We Claim:
1. A processor implemented method, comprising:
obtaining, via one or more hardware processors, an input document comprising a static signature of a user (202), wherein the signature is in a format comprising a color format or a grayscale format;
converting, via the one or more hardware processors, the format of the static signature to a black and white image, by using a binarization technique (204);
applying, via the one or more hardware processors, at least one filter type on the black and white image to obtain a denoised image (206), wherein the at least one filter type is applied on the black and white image based on type of noise comprised in the black and white image;
removing, via the one or more hardware processors, one or more vertical lines from the denoised image to obtain one or more regions of interest (208);
dilating, via the one or more hardware processors, the regions of interest by using a horizontal-shaped kernel to obtain a dilated image (210);
identifying, via the one or more hardware processors, a plurality of contours in the dilated image based on one or more pre-defined thresholds (212);
merging, via the one or more hardware processors, at least a subset of the plurality of contours based on one or more criteria to obtain a merged contour (214);
extracting, via the one or more hardware processors, one or more features from the merged contour (216); and
determining, via the one or more hardware processors, a presence of the static signature in the merged contour using the extracted one or more features (218).
2. The processor implemented method of claim 1, wherein the one or more pre-defined thresholds comprise (i) identifying one or more contours that have width and height greater than a threshold width and height respectively, (ii) identifying one or more contours that have width and height less than a threshold percentage of a document width and height respectively, (iii) identifying one or more contours that have non-zero area, (iv) identifying one or more contours that have a pixel density less than a most frequent pixel density, and (v) identifying one or more contours that have height greater than a most frequent contour height.
3. The processor implemented method of claim 1, wherein the one or more criteria comprise at least one of two or more contours that overlap with each other by a determined threshold vertically and two or more contours that overlap with each other by a determined threshold horizontally.
4. The processor implemented method of claim 1, wherein the type of noise comprises at least one of Gaussian noise, Salt and pepper noise, Poisson noise, and Speckle noise.
5. The processor implemented method of claim 1, further comprising:
obtaining at least one signature, wherein the at least one signature comprises the determined static signature or a new signature;
blurring, by using a blurring technique, the at least one signature to obtain a denoised image;
converting the denoised image to a black and white image;
applying a skeletonisation technique on the black and white image to obtain a reduced width image, wherein each pixel in the reduced width image is of a 1-pixel stroke width;
extracting one or more features from the reduced width image;
processing the extracted one or more features in a fully connected neural network to obtain genuine and forged scores corresponding to a plurality of users;
identifying a genuine and forged score corresponding to the user amongst the genuine and forged scores; and
verifying the at least one signature as belonging to a user based on the identified genuine and forged score.

6. A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
obtain an input document comprising a static signature of a user, wherein the signature is in a format comprising a color format or a grayscale format;
convert the format of the static signature to a black and white image, by using a binarization technique comprised in the memory (102);
apply at least one filter type on the black and white image to obtain a denoised image, wherein the at least one filter type is comprised in the memory (102) and is applied on the black and white image based on type of noise comprised in the black and white image;
remove one or more vertical lines from the denoised image to obtain one or more regions of interest;
dilate the regions of interest by using a horizontal-shaped kernel comprised in the memory (102) to obtain a dilated image;
identify a plurality of contours in the dilated image based on one or more pre-defined thresholds;
merge at least a subset of the plurality of contours based on one or more criteria to obtain a merged contour;
extract one or more features from the merged contour; and
determine a presence of the static signature in the merged contour using the extracted one or more features.
7. The system of claim 6, wherein the one or more predefined thresholds comprise (i) identifying one or more contours that have width and height greater than a threshold width and height respectively, (ii) identifying one or more contours that have width and height less than a threshold percentage of a document width and height respectively, (iii) identifying one or more contours that have non-zero area, (iv) identifying one or more contours that have a pixel density less than a most frequent pixel density, and (v) identifying one or more contours that have height greater than a most frequent contour height.
8. The system of claim 6, wherein the one or more criteria comprise at least one of two or more contours that overlap with each other by a determined threshold vertically and two or more contours that overlap with each other by a determined threshold horizontally.
9. The system of claim 6, wherein the type of noise comprises at least one of Gaussian noise, Salt and pepper noise, Poisson noise, and Speckle noise.
10. The system of claim 6, wherein the one or more hardware processors are further configured by the instructions to:
obtain at least one signature, wherein the at least one signature comprises the determined static signature or a new signature;
blur, by using a blurring technique comprised in the memory (102), the at least one signature to obtain a denoised image;
convert the denoised image to a black and white image;
apply a skeletonisation technique on the black and white image to obtain a reduced width image, wherein each stroke in the reduced width image has a 1-pixel width;
extract one or more features from the reduced width image;
process the extracted one or more features in a fully connected neural network to obtain genuine and forged scores corresponding to a plurality of users;
identify, via the fully connected neural network, a genuine and forged score corresponding to a user amongst the genuine and forged scores; and
verify, via the fully connected neural network, the at least one signature as belonging to the user based on the identified genuine and forged score.
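The verification stage of claim 10 can be pictured as a fully connected network whose output layer holds one (genuine, forged) logit pair per enrolled user; the identity claim is accepted only when the genuine logit of the claimed user exceeds the forged one. A toy forward pass in which the network shape and weights are purely illustrative assumptions:

```python
import numpy as np

def fc_scores(features, W1, b1, W2, b2):
    """Two-layer fully connected net: ReLU hidden layer, then one
    (genuine, forged) logit pair per enrolled user."""
    h = np.maximum(0.0, features @ W1 + b1)
    logits = h @ W2 + b2
    return logits.reshape(-1, 2)   # row u = (genuine_u, forged_u)

def verify(features, user_id, params):
    """Accept the claimed identity iff genuine score > forged score."""
    g, f = fc_scores(features, *params)[user_id]
    return g > f

# Toy weights: 3 input features, identity hidden layer, 2 users
# (4 output logits).
params = (np.eye(3), np.zeros(3),
          np.array([[1., 0., 0., 0.],
                    [0., 1., 0., 0.],
                    [0., 0., 1., 1.]]), np.zeros(4))
x = np.array([2., 1., 0.5])
print(verify(x, 0, params), verify(x, 1, params))  # → True False
```

In a real system the input features would come from the skeletonised signature image and the weights from training on genuine and forged exemplars per user; here they are fixed by hand only to make the decision rule concrete.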

Documents

Application Documents

# Name Date
1 201921046939-STATEMENT OF UNDERTAKING (FORM 3) [18-11-2019(online)].pdf 2019-11-18
2 201921046939-REQUEST FOR EXAMINATION (FORM-18) [18-11-2019(online)].pdf 2019-11-18
3 201921046939-FORM 18 [18-11-2019(online)].pdf 2019-11-18
4 201921046939-FORM 1 [18-11-2019(online)].pdf 2019-11-18
5 201921046939-FIGURE OF ABSTRACT [18-11-2019(online)].jpg 2019-11-18
6 201921046939-DRAWINGS [18-11-2019(online)].pdf 2019-11-18
7 201921046939-DECLARATION OF INVENTORSHIP (FORM 5) [18-11-2019(online)].pdf 2019-11-18
8 201921046939-COMPLETE SPECIFICATION [18-11-2019(online)].pdf 2019-11-18
9 201921046939-Proof of Right (MANDATORY) [26-11-2019(online)].pdf 2019-11-26
10 201921046939-ORIGINAL UR 6(1A) FORM 1-271119.pdf 2019-11-30
11 201921046939-FORM-26 [24-03-2020(online)].pdf 2020-03-24
12 Abstract1.jpg 2022-05-30
13 201921046939-FER.pdf 2022-06-15
14 201921046939-FER_SER_REPLY [05-08-2022(online)].pdf 2022-08-05
15 201921046939-DRAWING [05-08-2022(online)].pdf 2022-08-05
16 201921046939-COMPLETE SPECIFICATION [05-08-2022(online)].pdf 2022-08-05
17 201921046939-CLAIMS [05-08-2022(online)].pdf 2022-08-05
18 201921046939-ABSTRACT [05-08-2022(online)].pdf 2022-08-05
19 201921046939-US(14)-HearingNotice-(HearingDate-29-04-2024).pdf 2023-12-12
20 201921046939-FORM-26 [22-12-2023(online)].pdf 2023-12-22
21 201921046939-Correspondence to notify the Controller [26-12-2023(online)].pdf 2023-12-26
22 201921046939-US(14)-HearingNotice-(HearingDate-22-10-2024).pdf 2024-10-05
23 201921046939-Correspondence to notify the Controller [18-10-2024(online)].pdf 2024-10-18
24 201921046939-Written submissions and relevant documents [30-10-2024(online)].pdf 2024-10-30
25 201921046939-PatentCertificate25-11-2024.pdf 2024-11-25
26 201921046939-IntimationOfGrant25-11-2024.pdf 2024-11-25

Search Strategy

1 SearchHistory(53)E_15-06-2022.pdf

ERegister / Renewals

3rd: 03 Dec 2024 (From 18/11/2021 To 18/11/2022)
4th: 03 Dec 2024 (From 18/11/2022 To 18/11/2023)
5th: 03 Dec 2024 (From 18/11/2023 To 18/11/2024)
6th: 03 Dec 2024 (From 18/11/2024 To 18/11/2025)
7th: 17 Oct 2025 (From 18/11/2025 To 18/11/2026)