
System And Method For Signature Based Text Recognition For Triggers And Alerts Generation

Abstract: The present disclosure relates to a signature-based handwritten text recognition system that includes a writing pad that receives written input from a user; a camera that captures an image on the writing pad and further communicates with an image storage that stores the captured image; an image processor connected with a signature identifying engine that identifies a signature and a text identifying engine that identifies a text in the captured image; a signature authenticating engine that receives the identified signature from the signature identifying engine and compares it with a specimen signature stored in a signature specimen database; a text authenticating engine that compares the identified text in the captured image with prestored text stored in a text database; and a transmitter that receives authenticated signature data and text data and further communicates the text data with a number of output engines. The present disclosure also relates to a method for recognizing text based on signature.


Patent Information

Application #:
Filing Date: 05 April 2022
Publication Number: 51/2023
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email:
Parent Application:

Applicants

ARCTERN HEALTHCARE PRIVATE LIMITED
402, Tower-14, Close North Nirwana Country Near Fresco Apartment, Gurgaon

Inventors

1. SINGH, Randeep
61 Vidya Vihar, West Enclave, Pitampura, New Delhi 110034
2. JAIN, Pawan
1103, Panchavati, A Wing, Near S M Shetty School, Powai, Mumbai, Maharashtra 400072

Specification

Claims:

1. A signature-based text recognition system (100), wherein the system (100) comprises:
a. a writing pad (102), that receives a written input from a user;
b. a camera (104), that communicates with the writing pad (102) and captures an image on the writing pad (102);
c. an image storage (106), that communicates with the camera (104) and the writing pad (102), receives a captured image and stores the captured image;
d. an image processor (108), that receives the captured image from the image storage (106) and further communicatively coupled with a signature identifying engine (114) that identifies a signature in the captured image and a text identifying engine (110) that identifies a text in the captured image;
e. a signature authenticating engine (112), that receives an identified signature from the signature identifying engine (114), compares the identified signature in the captured image with a specimen signature stored in a signature specimen database (116);
f. a text authenticating engine (126) that receives an identified text from the text identifying engine (110) and compares the identified text in the captured image with a prestored text stored in a text database (128); and
g. a transmitter (118), that receives an authenticated signature data from the signature authenticating engine (112) and a text matched data from the text authenticating engine (126), and further communicates the text matched data with a plurality of output engines.
2. The system (100) as claimed in claim 1, wherein the plurality of output engines includes a triggering engine (120), an IOT server (122) and a display (124).
3. The system (100) as claimed in claim 1, wherein the writing pad (102) is selected from a group comprising a mobile phone, an electronic slate, a tablet, a laptop, a touch display, an electronic drawing board, a laser board or a digital slate.
4. The system (100) as claimed in claim 2, wherein the triggering engine (120) is further connected to a vending machine.
5. The system (100) as claimed in claim 1, wherein the IoT server (122) is a cloud storage.
6. The system (100) as claimed in claim 2, wherein the display (124) is a touch screen display that communicates with the writing pad/input engine (102) in real time.
7. The system (100) as claimed in claim 1, wherein the specimen signature includes stroke data, impression data, line quality, words and letter spacing, size consistency, pen lifts, connecting strokes, letter completion, cursive and printed letters, pen pressure, baseline habits, flourishes and embellishments, or diacritic placement.
8. The system (100) as claimed in claim 1, wherein the signature authenticating engine (112) further comprises a receiving engine (402) and a flexible template matching engine (404).
9. The system (100) as claimed in claim 8, wherein the flexible template matching engine (404) segments the identified signature into a plurality of parts.
10. The system (100) as claimed in claim 8, wherein the flexible template matching engine (404) compares the identified signature in the captured image with a specimen signature received from the signature specimen database (116).
11. The system (100) as claimed in claim 8, wherein the flexible template matching engine (404) is a standard engine and operates on template matching that works offline for raster data and online for vector data.
12. A method (200) for recognizing text based on signature, wherein the method (200) comprises:
scanning (202) a written input by a writing pad (102) and capturing an image by a camera (104);
processing (204) the captured image received from the camera (104) through an image storage (106) that stores the captured image, and further communicates with a signature identifying engine (114) and a text identifying engine (110);
identifying (206) a signature in the captured image by the signature identifying engine (114) and identifying a text in the captured image by the text identifying engine (110);
authenticating (208) an identified signature in the captured image by a signature authenticating engine (112) by comparing the identified signature in the captured image with a specimen signature received from a signature specimen database (116) and further communicating with a transmitting engine (118);
receiving (210) an authenticated signature data from the signature authenticating engine (112), the transmitter (118) communicates with a text authenticating engine (126) that receives a text data in the captured image from the text identifying engine (110), and the text authenticating engine (126) compares the text data in the captured image received from the text identifying engine (110) with a prestored text received from a text database (128); and
receiving (212) a text matched data from the text authenticating engine (126), the transmitter (118) transmits the text matched data to a plurality of output engines.
13. The method (200) as claimed in claim 12, wherein the plurality of output engines is selected from a group comprising a triggering engine (120), an IOT server (122) and a display (124).
14. The method (200) as claimed in claim 12, wherein the signature authenticating engine (112) terminates further communication with the transmitter (118) when the identified signature in the captured image mismatches with the specimen signature received from the signature specimen database (116).
15. The method (200) as claimed in claim 12, wherein the signature authenticating engine (112) holds in stand-by for 0-5 minutes for a user to provide input signature in the writing pad (102).
16. The method (200) as claimed in claim 12, wherein the signature authenticating engine (112) authenticates and communicates with the transmitter (118) on a matching score of 50%-90% of the identified signature in the captured image with the specimen signature.

Description:

TECHNICAL FIELD
The present disclosure relates to a system and method for recognizing handwritten text(s) based on signature. More particularly, the present disclosure relates to the system and the method for recognizing signature-based text of the user followed by triggering the result to display and/or take action.
BACKGROUND
Voice recognition is the technology by which sounds, words or phrases spoken by humans are converted into electrical signals, and these signals are transformed into coding patterns to which meaning has been assigned. Sound recognition, by contrast, focuses on the human voice only and therefore cannot be used to communicate ideas to others in the immediate surroundings. In the context of a virtual environment, the user would presumably gain the greatest feeling of immersion, or being part of the simulation, when the user could use their most common form of communication, the voice. The difficulty in using voice as an input to a computer system lies in the fundamental differences between human speech and the more traditional forms of computer input. While computer programs are commonly designed to produce a precise and well-defined response upon receiving the proper (and equally precise) input, the human voice and spoken words are anything but precise. Each human voice is different, and identical words can have different meanings if spoken with different inflections or in different contexts. Several approaches have been tried, with varying degrees of success, to overcome these difficulties.
Voice recognition software turns speech into text. This is useful for people with visual impairments and those with physical problems that make typing on a keyboard difficult. Others may use a system because they find talking easier than typing or simply because it is fun. Voice recognition technology is not perfect, however, and is associated with many disadvantages.
Voice recognition software will not always put a speaker's words on the screen completely accurately. Programs cannot understand the context of language the way that humans can, leading to errors that are often due to misinterpretation. When a person speaks to an audience, the audience decodes what the person says and gives it meaning. Voice recognition software can do this but may not be capable of choosing the correct meaning. For example, the voice recognition software cannot always differentiate between homonyms, such as "their" and "there". The voice recognition software may also have problems with slang, technical words and acronyms.
Computerizing a process usually speeds it up, but this is not necessarily true of a voice recognition system. Some programs adapt to voice and speech patterns over time; this may slow down the workflow until the program is up to speed. When a person talks too fast or indistinctly, the speech-to-text system may transcribe the words incorrectly. Getting used to using a system's commands and speaking punctuation aloud is not always easy. This can affect the flow and speed of speech.
Voice recognition systems can have problems with accents. Even though some may learn to decode the speech over time, a person has to learn to talk consistently and clearly at all times to minimize errors. When a person mumbles, talks too fast or runs words into each other, the software will not always be able to cope. Programs may also have problems recognizing speech as normal if the voice changes, for example due to a cold, cough, sinus or throat problem.
To get the best out of voice recognition software, the person needs a quiet environment. Systems do not work so well if there is a lot of background noise. They may not be able to differentiate between the speech, other people talking and other ambient noise, leading to transcription mix-ups and errors. This can cause problems if a person works in a busy office or noisy environment. Wearing close-talking microphones or noise-canceling headsets can help the system focus on the speech.
The user who uses voice recognition technology frequently, may experience some physical discomfort and vocal problems. Talking for extended periods can cause hoarseness, dry mouth, muscle fatigue, temporary loss of voice and vocal strain. The fact that the person is not talking naturally may make this worse and the person may need to learn how to protect their voice if the person will use a program regularly.
SUMMARY OF THE INVENTION
In one aspect of the disclosure a signature-based text recognition system is provided.
The system includes a writing pad, that receives a written input from a user, a camera, that communicates with the writing pad and captures an image on the writing pad, an image storage, that communicates with the camera and the writing pad, receives a captured image and stores the captured image, an image processor, that receives the captured image from the image storage and further communicatively coupled with a signature identifying engine that identifies a signature in the captured image and a text identifying engine that identifies a text in the captured image, a signature authenticating engine, that receives an identified signature from the signature identifying engine, compares the identified signature in the captured image with a specimen signature stored in a signature specimen database, a text authenticating engine that receives an identified text from the text identifying engine and compares the identified text in the captured image with a prestored text stored in a text database, and a transmitter, that receives an authenticated signature data from the signature authenticating engine and a text matched data from the text authenticating engine, and further communicates the text matched data with a plurality of output engines. The plurality of output engines includes a triggering engine, an IOT server and a display. The writing pad is selected from a group comprising a mobile phone, an electronic slate, a tablet, a laptop, a touch display, an electronic drawing board, a laser board or a digital slate. The triggering engine is further connected to a vending machine. The IoT server is a cloud storage. The display engine is a touch screen display that communicates with the writing pad/input engine in real time. 
The specimen signature includes stroke data, impression data, line quality, words and letter spacing, size consistency, pen lifts, connecting strokes, letter completion, cursive and printed letters, pen pressure, baseline habits, flourishes and embellishments, or diacritic placement. The signature authenticating engine further comprises a receiving engine and a flexible template matching engine. The flexible template matching engine segments the identified signature into a plurality of parts. The flexible template matching engine compares the identified signature in the captured image with a specimen signature received from the signature specimen storage. The flexible template matching engine is a standard engine and operates on template matching that works offline for raster data and online for vector data.
In another aspect of the disclosure a method for recognizing text based on signature is provided. The method involves scanning a written input by a writing pad and capturing an image by a camera, processing the captured image received from the camera through an image storage that stores the captured image, and further communicates with a signature identifying engine and a text identifying engine, identifying a signature in the captured image by the signature identifying engine and identifying a text in the captured image by the text identifying engine, authenticating an identified signature in the captured image by the signature authenticating engine by comparing the identified signature in the captured image with a specimen signature received from a signature specimen database and further communicating with a transmitting engine, receiving an authenticated signature data from the signature authenticating engine, wherein the transmitter communicates with a text authenticating engine that receives a text data in the captured image from the text identifying engine, and the text authenticating engine compares the text data in the captured image received from the text identifying engine with a prestored text received from a text database, and receiving a text matched data from the text authenticating engine, wherein the transmitter transmits the text matched data to a plurality of output engines. The plurality of output engines is selected from a group comprising a triggering engine, an IOT server and a display. The signature authenticating engine terminates further communication with the transmitter when the identified signature in the captured image mismatches with the specimen signature received from the signature specimen database. The signature authenticating engine holds in stand-by for 0-5 minutes for a user to provide input signature in the writing pad.
The signature authenticating engine authenticates and communicates with the transmitter on a matching score of 50%-90% of the identified signature in the captured image with the specimen signature.
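The authentication behaviour described in this summary, namely the 50%-90% matching band and termination on a mismatch, can be sketched as follows. This is an illustrative reading of the disclosure, not the claimed implementation; the function names and the routing strings are assumptions.

```python
# Illustrative sketch (not the claimed implementation) of the signature
# authenticating engine's decision: the disclosure states that authentication
# succeeds on a matching score of 50%-90% against the specimen signature,
# and that communication with the transmitter terminates on a mismatch.

def authenticate_signature(matching_score: float,
                           lower: float = 0.50,
                           upper: float = 0.90) -> bool:
    """Return True when the score falls inside the accepted matching band."""
    return lower <= matching_score <= upper

def route(matching_score: float) -> str:
    """Forward to the transmitter on a match; terminate otherwise."""
    if authenticate_signature(matching_score):
        return "forward-to-transmitter"
    return "terminate"
```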
BRIEF DESCRIPTION OF DRAWINGS
The drawing/s mentioned herein disclose exemplary embodiments of the claimed invention. Other objects, features, and advantages of the present invention will be apparent from the following description when read with reference to the accompanying drawing.
FIG. 1 illustrates a block diagram of a signature-based text recognition system, according to one embodiment herein;
FIG. 2 illustrates a flowchart that depicts a working of the signature-based text recognizing system of FIG. 1, according to another embodiment herein;
FIG. 3 depicts a schematic illustration of an example communications and/or computing system implemented according to an exemplary embodiment;
FIG. 4A illustrates a schematic illustration of signature authenticating engine, according to an embodiment herein; and
FIG. 4B illustrates a flowchart that depicts a working of a signature authenticating engine, according to another embodiment herein.
To facilitate understanding, like reference numerals have been used, where possible to designate like elements common to the figures.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
This section is intended to provide explanation and description of various possible embodiments of the present invention. The embodiments used herein, and the various features and advantageous details thereof are explained more fully with reference to non-limiting embodiments illustrated in the accompanying drawing/s and detailed in the following description. The examples used herein are intended only to facilitate understanding of ways in which the embodiments may be practiced and to enable the person skilled in the art to practice the embodiments used herein. Also, the examples/embodiments described herein should not be construed as limiting the scope of the embodiments herein.
In view of the problems with existing voice recognition, the present system and the present method are provided to overcome the limitations associated with speech-to-text systems. The term “handwriting recognition system 100”, herein referred to as “System 100”, and other such terms indicate a system for recognizing the handwriting of the user in real time, and the terms are used interchangeably across the context.
Further, the terms “doctor”, “physician”, “technician” and “user” mean a person who operates the system 100, and the terms are used interchangeably in their respective context.
The terms "processing element", "processor" and such other terms indicate the image processor and are used interchangeably across the context.
As mentioned, there is a need for the development of a handwriting-based text recognition system. The embodiments herein overcome the limitations of the prior art by providing a handwriting and signature-based text recognition system and method of recognizing the handwriting of a user in real time.
FIG. 1 illustrates a block diagram of a signature-based text recognition system 100, according to one embodiment herein.
The signature-based handwriting recognizing system includes a writing pad 102, a camera 104, an image storage 106, an image processor 108, a text identifying engine 110, a signature authenticating engine 112, signature identifying engine 114, a signature specimen database 116, a transmitter 118, a triggering engine 120, an IOT server 122, a display 124, a text authenticating engine 126 and a text database 128.
In the signature-based handwriting recognizing system 100, the writing pad 102 is communicatively coupled to the camera 104 and the camera 104 captures the image in the writing pad 102. In another embodiment, the camera 104 is mounted on a writing pen. In another embodiment, the writing pad 102 includes, but not limited to, a mobile phone, an electronic slate, a tablet, a laptop, a touch display, an electronic drawing board, a laser board or a digital slate. In another embodiment, the writing pad 102 is attached with an electronic pen and camera.
The camera 104 is communicatively coupled to the image storage 106, that stores a captured image from the camera 104 and the writing pad 102. In another embodiment, the captured image is a 2D image. In another embodiment, the captured image is a 3D volume. In another embodiment, the captured image is a topography. In another embodiment, the camera 104 is an X-ray camera. In another embodiment, the camera 104 is an infrared camera. In another embodiment, the camera 104 is a night vision camera. In another embodiment, the camera 104 is a video camera. In another embodiment, the camera 104 is a photo camera.
In an aspect, the captured image from the camera 104 and a scanned image from the writing pad 102 are merged in the image storage 106.
The image processor 108 receives the captured image from the image storage 106 and identifies a text and a signature in the captured image, and further communicates the captured image to the text identifying engine 110 and the signature identifying engine 114.
The signature identifying engine 114 receives the captured image from the image processor and identifies a signature in the captured image and further communicates an identified signature to the signature authenticating engine 112.
The signature authenticating engine 112 receives the identified signature from the signature identifying engine 114 and authenticates the identified signature in the captured image with a specimen signature stored in the signature specimen database 116 and further communicates with the transmitter 118.
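The comparison performed here can be sketched as a simple segment-wise template match, in the spirit of the flexible template matching engine described in the summary, which segments the identified signature into a plurality of parts. The point-sequence representation of a signature and the scoring function below are assumptions for illustration only.

```python
# Illustrative sketch (an assumption, not the disclosed implementation) of
# segment-wise template matching: the identified signature is split into
# parts and each part is compared with the corresponding part of the
# specimen signature from the signature specimen database 116.
from math import dist

def segment(points, n_parts):
    """Split a signature's point sequence into n roughly equal parts."""
    size = max(1, len(points) // n_parts)
    return [points[i:i + size] for i in range(0, len(points), size)][:n_parts]

def part_score(a, b):
    """Mean point-to-point distance between two segments, mapped to [0, 1]."""
    pairs = list(zip(a, b))
    if not pairs:
        return 0.0
    mean = sum(dist(p, q) for p, q in pairs) / len(pairs)
    return 1.0 / (1.0 + mean)  # 1.0 = identical; tends to 0 as they diverge

def match_score(candidate, specimen, n_parts=4):
    """Average the per-segment scores into one matching score."""
    parts_c = segment(candidate, n_parts)
    parts_s = segment(specimen, n_parts)
    scores = [part_score(a, b) for a, b in zip(parts_c, parts_s)]
    return sum(scores) / len(scores)
```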
The transmitter 118 receives an authenticated signature data from the signature authenticating engine 112 and communicates with the text authenticating engine 126 that receives a text data from the text identifying engine 110.
The text authenticating engine 126 receives the text data in the captured image and compares a received text data with prestored text data stored in the text database 128.
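The comparison by the text authenticating engine 126 can be sketched as below. The disclosure does not specify how the comparison is carried out, so the whitespace and case normalization applied before matching is an assumption for illustration.

```python
# Illustrative sketch (an assumption, not the disclosed implementation) of the
# text authenticating engine 126 comparing recognized text against the
# prestored entries of the text database 128.

def authenticate_text(identified_text, text_database):
    """Return the matching prestored entry, or None when nothing matches."""
    needle = " ".join(identified_text.lower().split())  # normalize case/spacing
    for entry in text_database:
        if " ".join(entry.lower().split()) == needle:
            return entry
    return None
```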
The transmitter 118 receives the text matched data from the text authenticating engine 126 and communicates the text matched data to a number of output devices (herein referred to as an output engine for a single component).
In an embodiment, the number of output devices includes, but not limited to, triggering engine 120, the IOT server 122 and the display 124. In another embodiment, the display 124 communicates with the user in real time. In another embodiment, the display communicates with the user in batch. In an embodiment, the IoT server 122 includes, but not limited to, a cloud storage, a disk storage, a flash storage.
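The data flow of system 100 described above can be sketched end to end as follows. The dictionary-based image and the stand-in identifying functions are assumptions for illustration only; what mirrors the disclosure is the ordering of the stages and that a mismatch at either authenticating engine stops the transmission.

```python
# A minimal, hypothetical sketch of the FIG. 1 data flow: capture, identify,
# authenticate the signature, authenticate the text, then fan the matched
# text out to the output engines via the transmitter.

def identify_signature(image):
    """Stand-in for the signature identifying engine 114."""
    return image.get("signature")

def identify_text(image):
    """Stand-in for the text identifying engine 110."""
    return image.get("text")

def process_captured_image(image, specimen_db, text_db, output_engines):
    signature = identify_signature(image)
    text = identify_text(image)
    if signature not in specimen_db:      # signature authenticating engine 112
        return None                       # mismatch: transmission terminates
    if text not in text_db:               # text authenticating engine 126
        return None
    for engine in output_engines:         # transmitter 118 fan-out
        engine(text)
    return text
```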
FIG. 2 illustrates a flowchart that depicts a working of the signature-based text recognizing system 100 of Figure 1, according to another embodiment herein. The method 200 for recognizing handwriting is provided.
At step 202, the written inputs are scanned by the writing pad 102 and captured image from the camera 104. In an embodiment, the written inputs are given to the writing pad 102 by the electronic pen.
At step 204, the captured image, received from the camera 104 through the image storage 106 that stores it, is processed by the image processor 108, which further communicates with the signature identifying engine 114 and the text identifying engine 110.
At step 206, the signature identifying engine 114 identifies the signature in the captured image and the text identifying engine 110 identifies the text in the captured image.
At step 208, the signature authenticating engine 112 authenticates the identified signature in the captured image by comparing the identified signature in the captured image with the specimen signature received from the signature specimen database 116 and further communicates with the transmitting engine 118.
At step 210, the transmitter 118 receives the authenticated signature data from the signature authenticating engine 112 and further communicates with the text authenticating engine 126 that receives the text data in the captured image from the text identifying engine 110 and the text authenticating engine compares the text data in the captured image with the prestored text data received from the text database 128.
At step 212, the transmitter 118 receives a text matched data from the text authenticating engine 126 and transmits the text matched data to the number of output engines. In an embodiment, the number of output engines includes the triggering engine 120, the IOT server 122 and the display 124 that communicates with the user.
In an aspect, the triggering engine 120 is connected to a vending machine. In another embodiment, the triggering engine triggers the vending machine to dispense a product.
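The triggering behaviour described here can be sketched as a mapping from the text matched data to a dispense command. The command table, slot identifiers and function name below are assumptions for illustration; the disclosure only states that the triggering engine triggers the vending machine to dispense a product.

```python
# Hypothetical sketch of the triggering engine 120 driving a vending machine:
# the text matched data received from the transmitter 118 is looked up in a
# command table (an assumption) to produce a dispense command.
from typing import Optional

COMMANDS = {"water": "DISPENSE_SLOT_1", "juice": "DISPENSE_SLOT_2"}

def trigger_vending_machine(text_matched_data: str) -> Optional[str]:
    """Return the dispense command for the recognized text, if any."""
    return COMMANDS.get(text_matched_data.strip().lower())
```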
FIG. 3 depicts a schematic illustration of an example communications and/or computing system 300 implemented according to an exemplary embodiment.
The system 300 can include one or more processing elements 108, for example, a central processing unit (CPU). The signature authenticating engine 112, being software, is embedded in the processor 108 (a hardware component).
According to an exemplary embodiment, the CPU is coupled via a bus 306 to a secondary memory 310. The secondary memory 310 includes, in an exemplary embodiment, a memory portion/removable storage unit 322 that can include instructions that, when executed by the processing element 108, can perform the methods described in more detail herein. The secondary memory 310 may be further used, according to an exemplary embodiment, as a temporary storage element for the processing element 108, and/or other uses, as the case may be. The memory may include, in an exemplary embodiment, volatile memory such as, e.g., but not limited to, a random-access memory (RAM), and/or a non-volatile memory (NVM), such as, e.g., but not limited to, Flash memory, etc., according to an exemplary embodiment. Secondary memory 310 may further include, in an exemplary embodiment, a hard disk drive 312 that includes application data, etc., according to an exemplary embodiment. The processing element 108 may be coupled to an input 102, in one exemplary embodiment. The processing element 108 may be further coupled with a database 308 and/or other main memory 330, according to an exemplary embodiment. Database system and/or storage device 308, in an example embodiment, can be used for the purpose of holding a copy of the method executed in accordance with the disclosed technique, according to an exemplary embodiment. Database 308 may further include, e.g., but may not be limited to, a storage portion, which may include sub-portions of an application and/or data referenced by the application, in an exemplary embodiment. In one embodiment, the system can be configured to execute the methods described herein with respect to the remaining figures, according to an exemplary embodiment. The exemplary method, system, and/or computer products may be hardwired or presented as a series of instructions to be executed by the processing element 108.
The principles disclosed herein can be implemented as hardware, firmware, or any combination thereof. The machine may be implemented on a computer platform 300 having hardware such as, e.g., but not limited to, a processing unit (“CPU”) 108, a memory 308, and/or input interfaces 102, output interfaces (not shown), as well as other components not shown for simplicity, but as would be well known to those skilled in the relevant art, according to an exemplary embodiment. The computer platform may also include, in an exemplary embodiment, an operating system and/or microinstruction code. The various processes and/or functions described herein may be either part of the microinstruction code and/or part of the application, and/or any combination thereof, which may be executed by a CPU 108, whether or not such computer and/or processor is explicitly shown, according to an exemplary embodiment. In addition, various other peripheral units may be connected, and/or coupled, to the computer platform such as, e.g., but not limited to, an additional memory unit 318 and/or removable memory unit 318, an additional data storage unit 314 and/or removable storage unit 314, and a printing unit, and/or display 124, and/or other input 102, output, communication 326 and/or networking components 326, etc., according to an exemplary embodiment.
References to “one embodiment,” “an embodiment,” “example embodiment,” “various embodiments,” “exemplary embodiment,” “exemplary embodiments,” etc., may indicate that the embodiment(s) so described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes the particular feature, structure, or characteristic. Further, repeated use of the phrase “in one embodiment,” or “in an exemplary embodiment,” do not necessarily refer to the same embodiment, although they may.
In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used, according to an exemplary embodiment. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, “connected” may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other, according to an exemplary embodiment. “Coupled” may mean that two or more elements are in direct physical or electrical contact, according to an exemplary embodiment. However, “coupled” may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other, according to an exemplary embodiment.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices, according to an exemplary embodiment.
In a comparable manner, the term “processor” can refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that can be stored in registers and/or memory, according to an exemplary embodiment. A “computing platform” can include one or more processors, according to an exemplary embodiment. In one embodiment, a processor can include an embedded processor, and/or another subsystem processor, and/or a system on a chip (SOC), device, according to an exemplary embodiment.
Embodiments may include apparatuses for performing the operations herein, according to an exemplary embodiment.
In another exemplary embodiment, the methods may be directed to a computer product include a computer readable medium having control logic stored therein. The control logic, when executed by the processor 108, may cause the processor 108 to perform features as described herein, according to an exemplary embodiment.
In yet another embodiment, implementation may be primarily in hardware using, for example, but not limited to, hardware components such as, e.g., but not limited to, application specific integrated circuits (ASICs), or one or more state machines, etc., according to an exemplary embodiment. Implementation of the hardware state machine so as to perform the functions described herein will be apparent to persons skilled in the relevant art(s), according to an exemplary embodiment.
In another exemplary embodiment, as noted, implementation can be primarily in firmware.
In yet another exemplary embodiment, implementation can combine any of, e.g., but not limited to, hardware, firmware, etc.
Exemplary embodiments may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by a computing platform to perform the methods described herein. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium can include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); secondary memory 310; main memory/storage 308; and others, according to an exemplary embodiment.
The exemplary embodiments make reference to wired and/or wireless networks, according to an exemplary embodiment. Wired networks can include any of a wide variety of well-known means or configurations for coupling voice and data communications devices together, according to an exemplary embodiment. Similarly, any of various exemplary wireless network technologies may be used to implement the embodiments discussed, according to an exemplary embodiment. Specific details of wireless and/or wired communications networks are well known and are not included, as will be apparent to those of ordinary skill in the relevant art, according to an exemplary embodiment.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions, according to exemplary embodiments. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof, according to an exemplary embodiment. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure, according to an exemplary embodiment.
FIG. 4A is a schematic illustration of the signature authenticating engine 112, according to an embodiment herein.
The signature authenticating engine 112 further includes a receiving engine 402, and a flexible template matching engine 404.
The receiving engine 402 receives the identified signature in the captured image from the signature identifying engine 114 and further communicates the identified signature in the captured image to the flexible template matching engine 404.
The flexible template matching engine 404 compares the identified signature in the captured image with the specimen signature received from the signature specimen database 116 and assigns a matching score to the identified signature in the captured image.
The flexible template matching engine 404 further communicates with the transmitter 118 when the matching score is more than 50%.
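The scoring and thresholding behaviour described above can be sketched as follows. This is an illustrative Python sketch only: the intersection-over-union metric on binarized signature images, the 0-100 score scale, and the function names are assumptions for explanation, not the claimed matching algorithm of engine 404.

```python
def matching_score(captured, specimen):
    """Return a 0-100 similarity score between two equal-sized binary
    images (lists of 0/1 rows), using a simple intersection-over-union
    of the inked pixels. Assumed metric, for illustration only."""
    inter = union = 0
    for row_c, row_s in zip(captured, specimen):
        for c, s in zip(row_c, row_s):
            inter += c & s
            union += c | s
    return 100.0 * inter / union if union else 0.0


def should_transmit(score, threshold=50.0):
    """Per the embodiment above, the engine communicates with the
    transmitter 118 only when the matching score exceeds 50%."""
    return score > threshold
```

For example, two signatures that overlap on three of four inked pixels would score 75% and be forwarded to the transmitter, while a 40% score would not.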
In an exemplary embodiment, the flexible template matching engine 404 segments the signature data into a number of parts. In an embodiment, the number of parts includes, but is not limited to, strokes, cursive, size consistency, word space and letter space, line quality and line completion, and pressure.
In another exemplary embodiment, the signature data includes, but is not limited to, stroke data, impression data, line quality, word and letter spacing, size consistency, pen lifts, connecting strokes, letter completion, cursive and printed letters, pen pressure, baseline habits, flourishes and embellishments, or diacritic placement.
In an exemplary embodiment, the flexible template matching engine 404 further includes a stroke classifier, a cursive classifier, a size consistency classifier, a word letter and space classifier, a line quality and line completion classifier, or a pressure classifier. In another exemplary embodiment, the stroke classifier is configured to identify a stroke in the segmented image. In another exemplary embodiment, the cursive classifier is configured to identify cursive letters and words in the segmented image. In another exemplary embodiment, the size consistency classifier is configured to identify the size consistency of text in the segmented image. In another exemplary embodiment, the word space and line space classifier is configured to identify word spaces and line spaces in the segmented image. In another exemplary embodiment, the line quality and line completion classifier 416 is configured to identify the quality of lines and the completion of lines in the segmented image. In another exemplary embodiment, the pressure classifier is configured to identify an impression of the pen or the pressure of the writing device in the segmented image. In another exemplary embodiment, the prestored data includes prestored stroke data, prestored impression data, prestored line quality, prestored word spacing and letter spacing, prestored size consistency, prestored pen lifts, prestored connecting strokes, prestored letter completion, prestored cursive and printed letters, prestored pen pressure, prestored baseline habits, prestored flourishes and embellishments, or prestored diacritic placement.
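One way the per-feature classifiers listed above might contribute to a single matching score is sketched below. The feature names follow the classifiers recited in the embodiment, but the flat averaging scheme and the function signature are assumptions for illustration; the actual engine 404 may weight or combine features differently.

```python
# Assumed feature set, mirroring the stroke, cursive, size consistency,
# word/letter space, line quality/completion, and pressure classifiers.
FEATURES = ("stroke", "cursive", "size_consistency",
            "word_letter_space", "line_quality_completion", "pressure")


def combine_feature_scores(scores):
    """Combine per-feature similarity scores (each 0-100), as produced by
    comparing a segmented signature against the prestored specimen data,
    into one overall matching score by simple averaging (an assumption)."""
    missing = [f for f in FEATURES if f not in scores]
    if missing:
        raise ValueError("missing feature scores: %s" % ", ".join(missing))
    return sum(scores[f] for f in FEATURES) / len(FEATURES)
```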
In an aspect, the flexible template matching engine 404 terminates further communication with the transmitter 118 when the identified signature in the captured image does not match the specimen signature received from the signature specimen database 116, or when the matching score is less than 50%.
In another aspect, the flexible template matching engine 404 authenticates the identified signature and communicates with the transmitter 118 when the matching score of the identified signature in the captured image against the specimen signature data ranges from 50% to 90%.
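The two aspects above imply a three-way decision on the matching score: below 50% the engine rejects and terminates communication, between 50% and 90% it authenticates and transmits, and above 90% the signature matches even more strongly. A minimal sketch, with band labels that are assumptions (the disclosure does not name them):

```python
def authenticate(score):
    """Map a 0-100 matching score to an assumed decision label, following
    the score bands described in the aspects above."""
    if score < 50.0:
        return "reject"        # terminate communication with transmitter 118
    if score <= 90.0:
        return "authenticate"  # authenticate and communicate text data
    return "strong_match"      # above the stated 50-90% band
```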
In another embodiment, the flexible template matching engine 404 is a standard engine that operates on template matching, working offline on raster data and online on vector data.
In another aspect, the signature authenticating engine 112 remains on stand-by for up to 5 minutes for the user to provide an input signature on the writing pad 102.
FIG. 4B illustrates a flowchart that depicts a working of the signature authenticating engine 112 of system 100, according to another embodiment herein.
The signature authenticating engine 112 communicates with the signature specimen database 116, compares the identified signature in the captured image with the specimen signature stored in the signature specimen database 116, and triggers the transmitter 118 to transmit the text data to the display 124 if the identified signature in the captured image matches the specimen signature received from the signature specimen database, as shown in FIG. 4B.
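The end-to-end flow of FIG. 4B can be sketched as the orchestration below. Every interface here (the specimen lookup, the scorer callable, and the transmitter object) is a hypothetical stand-in for the corresponding elements 116, 404, and 118, assumed only to make the control flow concrete.

```python
def authenticate_and_transmit(identified_sig, specimen_db, scorer,
                              transmitter, threshold=50.0):
    """Look up the user's specimen signature, score the identified
    signature against it, and trigger the transmitter only on a match.
    Returns True when text data was forwarded to the display."""
    specimen = specimen_db.get(identified_sig.user_id)
    if specimen is None:
        return False                       # no specimen on record
    score = scorer(identified_sig.image, specimen)
    if score > threshold:
        transmitter.send(identified_sig.text_data)  # text data to display 124
        return True
    return False                           # terminate communication
```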
While the disclosure has been presented with respect to certain specific embodiments, it will be appreciated that many modifications and changes may be made by those skilled in the art without departing from the spirit and scope of the disclosure. It is intended, therefore, by the appended claims to cover all such modifications and changes as fall within the true spirit and scope of the disclosure.

Documents

Application Documents

# Name Date
1 202211020388-STATEMENT OF UNDERTAKING (FORM 3) [05-04-2022(online)].pdf 2022-04-05
2 202211020388-FORM FOR SMALL ENTITY(FORM-28) [05-04-2022(online)].pdf 2022-04-05
3 202211020388-FORM 1 [05-04-2022(online)].pdf 2022-04-05
4 202211020388-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [05-04-2022(online)].pdf 2022-04-05
5 202211020388-DRAWINGS [05-04-2022(online)].pdf 2022-04-05
6 202211020388-DECLARATION OF INVENTORSHIP (FORM 5) [05-04-2022(online)].pdf 2022-04-05
7 202211020388-COMPLETE SPECIFICATION [05-04-2022(online)].pdf 2022-04-05
8 202211020388-FORM-26 [05-07-2022(online)].pdf 2022-07-05
9 202211020388-Proof of Right [04-10-2022(online)].pdf 2022-10-04