
A Deep Neural Network Based Framework System For Handwritten Text Pattern Recognition Using Structured Feature Extraction

Abstract: The invention discloses a deep neural network-based system and method for handwritten text recognition using structured feature extraction. The system comprises an image acquisition module, a preprocessing module, a feature extraction module, a classification module, and a training and optimization module. Handwritten text images are preprocessed through grayscale conversion, normalization, resizing, and noise reduction. The preprocessed images are processed by a deep neural network comprising multiple dense layers and dropout layers, which automatically extract handwriting features such as strokes, shapes, and character structures. A softmax classification layer outputs the recognized character or digit. The system is trained using labeled handwriting datasets with backpropagation and loss optimization, ensuring adaptability to diverse handwriting styles and languages. The invention overcomes the limitations of traditional OCR and rule-based systems by providing automated feature extraction, robustness against handwriting variability, and scalability for multilingual applications. It is suitable for education, banking, postal, and archival systems.


Patent Information

Application #
Filing Date
22 September 2025
Publication Number
43/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. CH. APARNA
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. DR. RAJCHANDAR K
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description:
FIELD OF THE INVENTION
The present invention relates to the field of artificial intelligence and image processing. More particularly, it pertains to a deep learning-based system and method for handwritten text recognition, employing neural networks with structured feature extraction to automatically identify handwritten characters and patterns with high accuracy and adaptability.
BACKGROUND OF THE INVENTION
Handwritten text recognition is a difficult task because of the inherent variability of personal writing styles, irregular character spacing, uneven alignment, and the noisy nature of scanned or photographed images. Rule-based systems and traditional OCR systems have limited ability to adjust to such variations, and in most cases they achieve low recognition accuracy. Moreover, such systems usually rely on hand-crafted features, which are neither flexible nor expressive enough to generalize across diverse handwriting patterns and datasets. Hence, there is an urgent need for a robust and scalable method that can automatically learn and extract significant features from handwritten text images and reliably recognize the character or pattern even in difficult, unstructured environments.
US20120114245A1: The present invention relates to a method and system for online script independent recognition of handwritten sub-word unit and words. More particularly the present invention relates to a system and method which enables online recognition of script independent sub-word unit and words by recognizing the written individual strokes prior to recognition of sub-word unit and words. The present invention provides an easy and natural to use method for handwritten sub-word unit and word recognition, wherein the application can be deployed on the existing communication means.
US8014603B2: A method of characterizing a word image includes traversing the word image stepwise with a window to provide a plurality of window images. For each of the plurality of window images, the method includes splitting the window image to provide a plurality of cells. A feature, such as a gradient direction histogram, is extracted from each of the plurality of cells. The word image can then be characterized based on the features extracted from the plurality of window images.
Handwritten text recognition is challenging due to the irregularity of personal writing styles, variations in spacing, distortions, and noise in scanned or photographed images. Traditional rule-based and optical character recognition systems depend on handcrafted features and rigid templates that cannot generalize effectively, resulting in poor accuracy.
The present invention solves these challenges by introducing a deep neural network framework capable of automatically extracting features from handwritten text images. It reduces dependence on handcrafted rules, adapts to diverse handwriting styles and languages, and provides scalable, accurate recognition even in noisy environments.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
The invention discloses a deep neural network-based framework for recognizing handwritten text patterns through structured feature extraction. The system begins by preprocessing handwritten images through grayscale conversion, normalization, noise reduction, and resizing, ensuring input consistency.
Processed images are fed into a deep neural network comprising multiple dense layers and dropout layers. The network automatically learns features such as strokes, curves, shapes, and character structures. Dropout layers prevent overfitting, making the system adaptive to new handwriting samples. A softmax classification layer at the final stage assigns probabilities to possible characters or digits.
The framework is trained on large labeled datasets of handwriting and can generalize to new samples, including unfamiliar writing styles and languages. Backpropagation and common loss functions optimize network weights to maximize accuracy.
The invention is highly scalable and applicable across various domains including education, banking, postal services, digitization of historical records, and multilingual handwriting recognition.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
The proposed system identifies patterns of handwritten text with the help of a deep neural network. It begins by pre-processing the input images: converting them to grayscale, normalizing, and resizing them so they are clean and consistent for analysis. These processed images then serve as input to a deep neural network whose dense layers and dropout layers learn and extract useful features of the handwriting. The network automatically detects patterns such as strokes, shapes, and character structures without manually designed features. Finally, a softmax layer performs the classification and indicates the correct digit or character. The system is trained with labeled data and can adapt to different handwriting styles, which is why it can also be used to read handwritten material in other languages and forms.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such detail as to clearly communicate the disclosure. However, the level of detail provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Key Components of the Implementation:
The proposed handwritten text recognition system comprises several components that form its backbone and work together to produce reliable results. It starts with pre-processing the images: converting the input to grayscale, resizing it, and removing noise from the data. The processed image is then fed into a deep neural network of multiple dense (fully connected) layers and dropout layers, which helps the model learn useful features and avoid overfitting. A final classification layer with softmax activation predicts the correct digit or character from the learned features. Backpropagation with common loss functions is used to update the model weights, and the model is trained on labelled handwriting datasets. The system's performance during training is monitored through accuracy and loss, ensuring that it performs as expected when faced with a new handwriting style. Together, these components provide a highly scalable solution to handwritten text recognition.
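The layer stack described above (dense layers, dropout, softmax output) can be sketched as a minimal NumPy forward pass. The layer sizes (784-128-64-10), dropout rate, and weight initialization below are illustrative assumptions, not values taken from the specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    # Fully connected layer: affine transform of the flattened image features.
    return x @ w + b

def relu(x):
    return np.maximum(0.0, x)

def dropout(x, rate, training, rng):
    # Inverted dropout: active only during training, identity at inference.
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative sizes: 28x28 input flattened to 784, two hidden layers, 10 classes.
w1, b1 = rng.normal(0, 0.05, (784, 128)), np.zeros(128)
w2, b2 = rng.normal(0, 0.05, (128, 64)), np.zeros(64)
w3, b3 = rng.normal(0, 0.05, (64, 10)), np.zeros(10)

def forward(x, training=False):
    h = relu(dense(x, w1, b1))
    h = dropout(h, 0.3, training, rng)
    h = relu(dense(h, w2, b2))
    h = dropout(h, 0.3, training, rng)
    return softmax(dense(h, w3, b3))

probs = forward(rng.random((2, 784)))  # two dummy "images": each row sums to 1
```

In practice such a stack would be built with a deep learning framework; the sketch only makes the data flow of the claimed modules concrete.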
The present invention is considered novel because its architecture is based on a deep neural network (DNN) that discovers handwritten text patterns by learning, without handcrafted features or time-consuming preprocessing rules. The strength of this system is that it learns the valuable visual characteristics of raw handwritten images within its dense neural layers, which traditional OCR systems could not do owing to their fixed templates or hand-crafted features. To achieve generalization, the architecture is built with dropout layers and is trained end-to-end on labeled data, and hence adapts to variation in handwriting style and language. The approach provides a simple yet effective framework that is seamless, accurate, and superior to conventional methods for identifying handwritten text.
The invention provides a system designed to accurately recognize handwritten text patterns using deep neural networks.
The process begins with the acquisition of handwritten text images through scanning or digital photography. These images often contain noise due to background interference, poor lighting, or irregular strokes. Preprocessing is applied to mitigate these issues. Conversion to grayscale simplifies image complexity, while normalization standardizes intensity values. Images are resized to a uniform scale, and noise reduction techniques are applied to improve clarity.
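The preprocessing chain above (grayscale conversion, normalization, resizing) can be illustrated in plain NumPy. The block-averaging resize, the BT.601 luminance weights, and the 28x28 output size are illustrative assumptions; a production system would use a dedicated image library:

```python
import numpy as np

def preprocess(rgb, out_size=28):
    """Grayscale -> normalize to [0, 1] -> resize by block averaging.

    Assumes `rgb` is an H x W x 3 uint8 array whose sides are multiples of
    `out_size`. Block averaging also mildly suppresses pixel-level noise.
    """
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # Normalize pixel intensities to the [0, 1] range.
    gray = gray / 255.0
    # Resize by averaging non-overlapping blocks.
    h, w = gray.shape
    bh, bw = h // out_size, w // out_size
    small = gray[:bh * out_size, :bw * out_size]
    return small.reshape(out_size, bh, out_size, bw).mean(axis=(1, 3))

img = (np.random.default_rng(1).random((56, 56, 3)) * 255).astype(np.uint8)
x = preprocess(img)  # 28x28 array with values in [0, 1]
```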
The preprocessed images are then fed into a deep neural network architecture. The system includes dense layers that capture complex relationships in pixel data, enabling it to learn visual structures inherent to handwriting. The model extracts high-level features automatically, avoiding the need for handcrafted descriptors.
Dropout layers are incorporated to regularize the model, reducing the likelihood of overfitting and ensuring the network can generalize across different handwriting styles. This enhances robustness, allowing the model to perform well on new, unseen handwriting data.
The final classification layer employs a softmax activation function, which assigns probabilities to different character classes. The output corresponds to the most probable digit or character in the input image.
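The softmax decision step can be shown in isolation. The logit values and the digit label set below are made up purely for illustration:

```python
import numpy as np

def classify(logits, labels):
    # Softmax converts raw scores to class probabilities; argmax picks the winner.
    z = logits - logits.max()  # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    k = int(p.argmax())
    return labels[k], float(p[k])

# Illustrative logits for digit classes 0-9 (values are invented).
logits = np.array([0.1, 0.3, 2.9, 0.2, 0.1, 0.0, 0.4, 0.2, 0.1, 0.3])
char, conf = classify(logits, list("0123456789"))  # predicts "2" with its probability
```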
Training involves feeding labeled handwriting datasets into the network. Through backpropagation, weights are adjusted based on loss functions such as categorical cross-entropy, gradually improving prediction accuracy. Performance metrics such as accuracy and error rates are monitored to assess the model during training and validation.
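The training loop (categorical cross-entropy, gradient computed by backpropagation, weight updates) can be sketched with a single-layer softmax classifier standing in for the full network; the synthetic data, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy labeled "handwriting" data: 64-dim feature vectors, 10 classes.
n, d, k = 200, 64, 10
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)
Y = np.eye(k)[y]                      # one-hot targets

W, b = np.zeros((d, k)), np.zeros(k)
losses = []
for _ in range(100):
    P = softmax(X @ W + b)
    # Categorical cross-entropy averaged over the batch.
    losses.append(-np.mean(np.sum(Y * np.log(P + 1e-12), axis=1)))
    # For cross-entropy + softmax, the gradient w.r.t. the logits is (P - Y).
    G = (P - Y) / n
    W -= 0.5 * (X.T @ G)              # gradient-descent weight update
    b -= 0.5 * G.sum(axis=0)
```

The first loss equals ln(10) (uniform predictions from zero weights) and decreases as the weights fit the labeled data, mirroring the accuracy/loss monitoring described above.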
The invention is adaptable to multiple languages. Since the model learns generic visual patterns, it can be retrained or fine-tuned for datasets representing different alphabets or scripts. This makes it useful for global applications.
Another advantage of the invention is its scalability. The architecture can be implemented on high-performance computing systems for large-scale datasets or optimized for edge devices for lightweight real-time recognition.
The invention addresses deficiencies in traditional OCR systems, which rely on rigid templates and are unable to handle variations in handwriting. By contrast, the proposed framework learns directly from data, enabling it to capture strokes, irregularities, and contextual variations naturally present in handwriting.
The invention can be integrated into applications such as banking for automated cheque processing, postal services for address recognition, academic institutions for examination digitization, and archiving systems for digitizing handwritten manuscripts.
Its modular design allows integration with additional layers such as convolutional neural networks for feature refinement or recurrent neural networks for sequence modeling. The system can further incorporate language models to improve recognition accuracy by providing context-aware predictions.
Overall, the invention represents a significant advancement in handwritten text recognition, providing a flexible, adaptive, and high-accuracy solution.
Best Method of Working
The best method of working involves implementing the system as a deep neural network with multiple dense and dropout layers. Handwritten text images are preprocessed by converting them to grayscale, resizing them to uniform dimensions, and applying noise reduction filters. These images are then fed into the neural network, which automatically extracts features and classifies them through a softmax output layer. The system is trained using large, labeled handwriting datasets with backpropagation optimization. The trained model is deployed on computing infrastructure, with options for real-time recognition or batch processing. Dropout layers ensure adaptability across handwriting variations, and the system can be fine-tuned for different languages and scripts.
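Batch-mode deployment as described above can be sketched as follows. The stub model, the 28x28 image size, and the low-confidence review threshold are illustrative assumptions added for the example, not part of the specification:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(784, 10))  # stand-in for trained weights

def stub_model(x):
    # Stand-in for the trained network: fixed projection + softmax.
    z = x @ W
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def recognize(image, model):
    # Flatten the preprocessed image and take the most probable class.
    probs = model(image.reshape(-1))
    return int(probs.argmax()), float(probs.max())

def batch_recognize(images, model, review_threshold=0.5):
    """Classify each image; flag low-confidence results for manual review."""
    results = []
    for img in images:
        label, conf = recognize(img, model)
        results.append((label, conf, conf >= review_threshold))
    return results

batch = [rng.random((28, 28)) for _ in range(3)]
out = batch_recognize(batch, stub_model)  # list of (label, confidence, accepted)
```

The same `recognize` call could serve a real-time path, with `batch_recognize` reserved for offline digitization jobs.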

Claims:
1. A deep neural network-based system for handwritten text recognition comprising:
an image acquisition module configured to capture handwritten text images;
a preprocessing module configured to convert images to grayscale, normalize, resize, and reduce noise;
a feature extraction module comprising multiple dense layers and dropout layers configured to learn handwriting features;
a classification module with a softmax activation function configured to output recognized characters or digits; and
a training and optimization module configured to train the system using labeled handwriting datasets with backpropagation,
wherein the system provides accurate and adaptive handwritten text recognition.
2. The system as claimed in claim 1, wherein the preprocessing module standardizes input images for consistent analysis.
3. The system as claimed in claim 1, wherein the feature extraction module automatically identifies strokes, shapes, and character structures without handcrafted features.
4. The system as claimed in claim 1, wherein the dropout layers prevent overfitting and enhance generalization to unseen handwriting.
5. The system as claimed in claim 1, wherein the system is adaptable to recognition of multiple languages and scripts.
6. A method for handwritten text recognition using a deep neural network, the method comprising:
acquiring handwritten text images;
preprocessing the images by grayscale conversion, normalization, resizing, and noise reduction;
extracting features using dense layers and dropout layers of a neural network;
classifying the features into characters or digits using a softmax activation function; and
training the network using labeled handwriting datasets with backpropagation,
wherein the method ensures adaptive recognition across handwriting variations.
7. The method as claimed in claim 6, wherein preprocessing reduces image noise and irregularities in handwriting samples.
8. The method as claimed in claim 6, wherein feature extraction is performed automatically without manual feature engineering.
9. The method as claimed in claim 6, wherein the system generalizes across diverse handwriting styles by incorporating dropout regularization.
10. The method as claimed in claim 6, wherein the method supports multilingual handwriting recognition by retraining on language-specific datasets.

Documents

Application Documents

# Name Date
1 202541090190-STATEMENT OF UNDERTAKING (FORM 3) [22-09-2025(online)].pdf 2025-09-22
2 202541090190-REQUEST FOR EARLY PUBLICATION(FORM-9) [22-09-2025(online)].pdf 2025-09-22
3 202541090190-POWER OF AUTHORITY [22-09-2025(online)].pdf 2025-09-22
4 202541090190-FORM-9 [22-09-2025(online)].pdf 2025-09-22
5 202541090190-FORM FOR SMALL ENTITY(FORM-28) [22-09-2025(online)].pdf 2025-09-22
6 202541090190-FORM 1 [22-09-2025(online)].pdf 2025-09-22
7 202541090190-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [22-09-2025(online)].pdf 2025-09-22
8 202541090190-EVIDENCE FOR REGISTRATION UNDER SSI [22-09-2025(online)].pdf 2025-09-22
9 202541090190-EDUCATIONAL INSTITUTION(S) [22-09-2025(online)].pdf 2025-09-22
10 202541090190-DRAWINGS [22-09-2025(online)].pdf 2025-09-22
11 202541090190-DECLARATION OF INVENTORSHIP (FORM 5) [22-09-2025(online)].pdf 2025-09-22
12 202541090190-COMPLETE SPECIFICATION [22-09-2025(online)].pdf 2025-09-22