Abstract: The present disclosure provides a method for converting handwritten characters into machine-readable instructions. The method includes receiving from a user an image of handwritten characters organized within a grid layout, detecting bounding boxes within the grid layout to isolate individual characters, removing background noise from the image to enhance character recognition, categorizing the isolated characters based on their ASCII equivalents, removing borders from the isolated characters to refine the image, embedding the characters into a numerical format for digital processing, and transforming the numerical format into machine instructions.
Description: Field of the Invention
Generally, the present disclosure relates to character recognition technologies. Particularly, the present disclosure relates to a method for converting handwritten characters into machine-readable instructions.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
The present disclosure relates to methods employed in character recognition technology, an area that has seen significant development over the years. Character recognition technology primarily involves the conversion of images of text into machine-encoded text, which is a fundamental technology used in various applications ranging from automated data entry systems to real-time translation devices. The methods for character recognition are continuously evolving with advancements in image processing, artificial intelligence, and machine learning.
One commonly implemented method for character recognition is Optical Character Recognition (OCR) technology. OCR technology involves scanning of text document images, analyzing the images, and translating the characters in the images into character codes that are used in data processing. This technology enables the extraction of text from a scanned document or a photo and converts it into a machine-readable form. However, OCR systems often struggle with text that is handwritten or presented in a non-standard format, which results in errors in character recognition and requires subsequent human intervention for corrections.
Moreover, another method used in character recognition is Intelligent Character Recognition (ICR) technology. This method is an advanced version of OCR that allows fonts and different styles of handwriting to be learned by a computer during processing to improve accuracy and recognition levels over time. ICR technology is highly beneficial for interpreting and digitizing handwritten documents. Nevertheless, the effectiveness of ICR is often compromised by the presence of noise in the background of the images, irregularities in handwritten text, and the variability in human handwriting.
Further challenges arise with both OCR and ICR technologies, which include issues related to the detection of individual characters in cluttered layouts or images where multiple characters are closely spaced. Such challenges are particularly prevalent when characters are organized within a grid layout where each character must be identified and isolated for accurate recognition. Additionally, these methods often involve complex preprocessing steps like noise reduction and border removal to enhance the clarity and quality of the text for better processing and conversion into machine-readable formats.
In light of the above discussion, there exists an urgent need for solutions that overcome the problems associated with conventional systems and/or techniques for converting handwritten characters into machine-readable instructions.
Summary
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
In an aspect, the present disclosure provides a method for converting handwritten characters into machine-readable instructions. The method includes the steps of receiving an image of handwritten characters organized within a grid layout, detecting bounding boxes to isolate individual characters, removing background noise to enhance character recognition, categorizing the isolated characters based on their ASCII equivalents, refining the image by removing borders from the isolated characters, embedding the characters into a numerical format, and transforming the numerical format into machine instructions suitable for directing the operations of a Computer Numerical Control (CNC) machine.
Furthermore, the present disclosure employs a convolutional neural network for detecting bounding boxes and applies a thresholding technique for removing background noise. The method enhances the clarity and usability of the processed characters for machine instruction generation.
In another aspect, the present disclosure provides a system for converting handwritten characters into machine-readable instructions. This system comprises an input module configured to receive the image, a detection module for isolating characters, a background noise removal module, a categorization module, a border removal module, an embedding module, and a transformation module. The transformation module utilizes a G-code generator to convert the numerical format into G-code instructions, while the embedding module employs vectorization techniques.
Moreover, the system's detection module is equipped with dynamic scaling to accommodate various character sizes, enhancing the flexibility and accuracy of character recognition. The system further ensures that characters are effectively transformed into a format that can be used to create outputs resembling human-written text.
Brief Description of the Drawings
The features and advantages of the present disclosure would be more clearly understood from the following description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a method (100) for converting handwritten characters into machine-readable instructions, in accordance with the embodiments of the present disclosure.
FIG. 2 illustrates a block diagram of a system (200) for converting handwritten characters into machine-readable instructions, in accordance with the embodiments of the present disclosure.
FIG. 3 illustrates a flowchart that represents the step-by-step process of converting handwriting into a G-code instruction, in accordance with the embodiments of the present disclosure.
FIG. 4 illustrates an exemplary grid layout showcasing an input interface for collection of the English alphabet in uppercase and lowercase letters, digits from 0 to 9, and a selection of common punctuation marks and symbols, in accordance with the embodiments of the present disclosure.
FIG. 5 illustrates a side-by-side comparison of an input and output associated with the handwriting to G-Code conversion process, in accordance with the embodiments of the present disclosure.
Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
FIG. 1 illustrates a method (100) for converting handwritten characters into machine-readable instructions, in accordance with the embodiments of the present disclosure. The method (100) for converting handwritten characters into machine-readable instructions comprises:
In step (102), receiving from a user an image of handwritten characters organized within a grid layout. The method (100) begins by receiving an image, which contains handwritten characters systematically placed in a grid layout. This initial step is crucial for setting the foundation for subsequent character isolation and processing. The image is typically captured via optical devices like scanners or digital cameras which are designed to handle varying levels of detail and lighting conditions to maintain the integrity of the handwritten characters. In step (104), detecting bounding boxes within the grid layout to isolate individual characters. In an embodiment, the detection module (204) is equipped with dynamic scaling to accommodate various character sizes within the grid layout. The step (104) involves the use of algorithms to detect and draw bounding boxes around each character within the grid layout. By isolating each character, the system ensures that the processing of characters can be done individually, which increases the accuracy of the recognition processes. In step (106), removing background noise from the image to enhance character recognition. In another embodiment, removing background noise includes applying a thresholding technique to distinguish the characters from the background. This process involves the application of image processing techniques such as filtering and thresholding to reduce or eliminate background noise that could interfere with the recognition of handwritten characters. These techniques enhance the clarity of the characters, thereby improving the accuracy of the system in recognizing and interpreting each character.
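By way of illustration only, the following Python sketch shows one way steps (104) and (106) could be approximated with the open-source OpenCV library. The function name, the minimum-area filter, and the row-bucketing height are assumptions made for this example and are not mandated by the method (100).

```python
import cv2

def detect_character_boxes(image_path, min_area=100, row_height=50):
    """Step (104) sketch: find bounding boxes of inked regions and order them in grid reading order."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Step (106) sketch: binarize with Otsu's threshold, inverted so ink becomes white foreground.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    # Bucket boxes into rows of an assumed height, then sort left-to-right within each row.
    boxes.sort(key=lambda b: (b[1] // row_height, b[0]))
    return boxes
```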
In step (108), categorizing the isolated characters based on their ASCII equivalents. Once characters are isolated and noise is reduced, the method involves categorizing each character based on ASCII equivalents. This step converts the visual representation of the characters into a standardized digital format, which is crucial for subsequent digital processing and transformation into machine-readable instructions. In step (110), removing borders from the isolated characters to refine the image. Following the categorization, the method (100) includes a process to remove any remaining borders or edges around the isolated characters. This refinement step is essential to prepare the characters for accurate digital embedding and to ensure that no extraneous marks are carried over into the final digital format.
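As an illustrative sketch of steps (108) and (110), the snippet below maps each grid cell to an ASCII code by its position in a known template and trims a fixed margin to strip residual borders. The GRID_CHARS ordering and the margin value are assumptions, since the disclosure does not fix a particular template order.

```python
GRID_CHARS = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
              "abcdefghijklmnopqrstuvwxyz"
              "0123456789.,;:!?()-")

def categorize_by_grid_position(cell_index):
    """Step (108) sketch: look up the ASCII equivalent of the character expected in this cell."""
    return ord(GRID_CHARS[cell_index])

def remove_borders(cell_image, margin=3):
    """Step (110) sketch: crop a small margin so leftover grid lines do not reach later stages."""
    return cell_image[margin:-margin, margin:-margin]
```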
In step (112), embedding the characters into a numerical format for digital processing. After refining the images, the characters are embedded into a numerical format. This embedding involves converting the characters into a format that can be readily processed digitally, setting the stage for their transformation into executable machine instructions. In step (114), transforming the numerical format into machine instructions. In an embodiment, the transformation module (214) utilizes a G-code generator for converting the numerical format into G-code instructions. This step is the culmination of the method where the processed characters are converted into a set of instructions that can be executed by machines, thereby allowing for the automated interaction with other digital systems and machinery, such as CNC machines.
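A minimal sketch of step (112) is given below, assuming the numerical format is a fixed-size, normalized pixel vector per character; the 28x28 size is an assumption borrowed from common handwriting datasets, not a requirement of the disclosure.

```python
import cv2
import numpy as np

def embed_character(cell_image, size=28):
    """Step (112) sketch: resize a character crop to a fixed shape and flatten it to a normalized vector."""
    resized = cv2.resize(cell_image, (size, size), interpolation=cv2.INTER_AREA)
    return (resized.astype(np.float32) / 255.0).ravel()
```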
In an embodiment, the machine instructions produced by the method (100) are particularly suited for directing the operations of a Computer Numerical Control (CNC) machine. After the method (100) has embedded the characters into a numerical format for digital processing, the resulting numerical format undergoes transformation into machine instructions that are highly compatible with CNC machines. The transformation module (214) employs specialized algorithms to ensure that the machine instructions can effectively communicate with CNC machinery, enabling precision in machining operations based on the processed handwritten characters. This feature is crucial for applications in manufacturing where designs or instructions are often handwritten and need accurate translation into machine commands without the need for manual input. The incorporation of CNC-compatible instructions enables automated machining tasks, thereby enhancing productivity and reducing errors associated with manual data entry.
In another embodiment, the method (100) incorporates the use of a convolutional neural network to detect bounding boxes within the grid layout during the detection phase. This enhancement enables more accurate and efficient isolation of individual characters by leveraging the sophisticated pattern recognition capabilities of convolutional neural networks. The detection module (204) equipped with this neural network technology allows for dynamic scaling and enhanced adaptability to various handwriting styles and character sizes, significantly improving the detection accuracy compared to traditional bounding box detection methods. The application of convolutional neural networks in this step ensures that each character is accurately framed, with minimal inclusion of surrounding grid lines or other characters, which is pivotal for the subsequent steps of noise removal and character categorization.
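The disclosure does not specify the network architecture; purely as a hypothetical example, a small PyTorch regressor of the kind sketched below could be trained to predict one normalized (x, y, w, h) box per grid-cell patch. The layer sizes and the 64x64 input resolution are assumptions.

```python
import torch.nn as nn

class BoxRegressor(nn.Module):
    """Tiny convolutional network that regresses one normalized (x, y, w, h) box per input patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> 16x16 feature map with 32 channels after two 2x poolings.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 4), nn.Sigmoid())

    def forward(self, x):
        # x: (N, 1, 64, 64) grayscale patches; output: box coordinates scaled to [0, 1].
        return self.head(self.features(x))
```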
In a further embodiment, the method (100) for removing background noise from the image incorporates a thresholding technique to distinctly separate the characters from the background. This technique involves adjusting the pixel values within the image such that pixels representing the characters are enhanced while those representing the background are minimized. The background noise removal module (206) applies this technique to ensure that any shadows, grid lines, or color variations that do not form part of the actual characters are effectively suppressed. By enhancing the contrast between the characters and the background, the method improves the clarity and readability of the characters before they are categorized based on their ASCII equivalents. This specific technique of noise removal is crucial for maintaining high levels of accuracy in character recognition, particularly in environments where the handwritten characters may be faint or obscured by background patterns or textures.
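As one concrete, merely illustrative realization of such a thresholding technique, OpenCV's adaptive thresholding computes a local threshold per neighbourhood, which helps when characters are faint or the background is unevenly lit; the block size and offset below are assumed values.

```python
import cv2

def suppress_background(gray_image):
    """Binarize with a locally adaptive threshold; ink becomes white foreground, background black."""
    return cv2.adaptiveThreshold(
        gray_image,
        255,                              # value assigned to foreground pixels
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,   # threshold = Gaussian-weighted local mean minus C
        cv2.THRESH_BINARY_INV,            # invert so the characters are foreground
        31,                               # neighbourhood size (assumed)
        10)                               # constant subtracted from the local mean (assumed)
```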
The system for converting handwritten characters into machine-readable instructions comprises modules for receiving images, detecting characters, removing noise, categorizing, refining images, embedding in numerical formats, and transforming these into instructions suitable for digital and mechanical systems.
The term "input module" as used throughout the present disclosure relates to a component configured to receive an image containing handwritten characters organized within a grid layout. The input module captures images via devices such as scanners or cameras that are capable of providing high-resolution images to ensure that the grid layout and individual characters are discernible. This module serves as the initial point of entry for the images into the system, setting the stage for subsequent processing stages.
The term "detection module" as used throughout the present disclosure relates to a component designed to detect bounding boxes within the grid layout of the received image. The detection module employs algorithms that accurately outline each character within the grid, isolating them for further analysis. This isolation is crucial as it allows for precise character handling in the later stages of processing.
The term "background noise removal module" as used throughout the present disclosure refers to a component tasked with enhancing character recognition by removing extraneous visual noise from the image. The background noise removal module applies advanced image processing techniques to clarify the image, ensuring that the characters stand out clearly against the background, which facilitates more accurate recognition and categorization.
The term "categorization module" as used throughout the present disclosure pertains to a component configured to categorize isolated characters based on their ASCII equivalents. The categorization module analyzes each character’s visual form and converts it into a corresponding ASCII value, a step essential for digitizing the handwritten characters into a format that can be further processed.
The term "border removal module" as used throughout the present disclosure describes a component responsible for refining the image by removing borders around the isolated characters. The border removal module ensures that any residual graphical elements that do not contribute to the actual character are eliminated, thus preparing the characters for accurate embedding into numerical formats.
The term "embedding module" as used throughout the present disclosure refers to a component configured to embed the characters into a numerical format for digital processing. The embedding module converts the ASCII characters into a digital code that can be used in computing environments, facilitating the transformation into machine-readable instructions.
The term "transformation module" as used throughout the present disclosure relates to a component configured to transform the numerical format into machine instructions. The transformation module takes the digital codes produced by the embedding module and converts them into a set of executable instructions for machines, enabling the use of the processed data in various digital and mechanical systems.
FIG. 2 illustrates a block diagram of a system (200) for converting handwritten characters into machine-readable instructions, in accordance with the embodiments of the present disclosure. Said system (200) is comprised of various modules, each configured for specific functions in the process. An input module (202) is illustrated, configured to receive an image of handwritten characters arranged within a grid layout. Said input module (202) may include an optical scanner or a digital camera interface. Adjacent to the input module (202), a detection module (204) is depicted. Said detection module (204) is configured to detect bounding boxes within the grid layout for the purpose of isolating individual characters. Said detection module (204) is equipped with dynamic scaling to accommodate characters of varying sizes. To the right of the detection module (204), a background noise removal module (206) is shown, which is configured to enhance character recognition by removing noise from the image. This enhancement of the image quality facilitates more accurate processing of the handwritten characters. Below the input module (202), a categorization module (208) is represented, which is configured to categorize the isolated characters based on their ASCII equivalents. Following this, a border removal module (210) is illustrated and is configured to refine the image by removing any borders from the isolated characters. Additionally, at the bottom right, an embedding module (212) and a transformation module (214) are depicted. Said embedding module (212) is configured to embed the characters into a numerical format for digital processing, potentially utilizing vectorization techniques. Said transformation module (214) is configured to transform the numerical format into machine instructions, which may include generating G-code instructions for use with CNC machines.
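Solely to make the data flow of FIG. 2 concrete, the sketch below chains the illustrative helper functions from the earlier examples in the order of the modules (202) through (214); the helper names are hypothetical and do not appear in the disclosure.

```python
import cv2

def convert_handwritten_grid(image_path):
    """Illustrative end-to-end flow mirroring the modules of FIG. 2 (reuses the sketches above)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)            # input module (202)
    boxes = detect_character_boxes(image_path)                     # detection module (204)
    clean = suppress_background(gray)                              # background noise removal module (206)
    cells = [remove_borders(clean[y:y + h, x:x + w])               # border removal module (210)
             for (x, y, w, h) in boxes]
    codes = [categorize_by_grid_position(i)                        # categorization module (208)
             for i in range(len(cells))]
    vectors = [embed_character(c) for c in cells]                  # embedding module (212)
    return codes, vectors  # handed to the transformation module (214), e.g. a G-code generator
```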
In an embodiment, the input module (202) of the system (200) includes an optical scanner or a digital camera interface specifically designed for capturing images of handwritten characters. This configuration enables the input module (202) to accommodate a wide range of document types and handwriting styles by providing high-resolution imaging capabilities that ensure detailed and accurate capture of the grid layouts in which the characters are organized. The use of optical scanners allows for precise digitization of paper-based inputs, while digital camera interfaces facilitate the capture of handwritten notes directly from digital screens or through photographs. This flexibility greatly enhances the system's usability in diverse environments, from office settings to mobile applications, where users may need to convert handwritten notes into digital formats.
In another embodiment, the embedding module (212) of the system (200) utilizes a vectorization technique to convert characters into a scalable vector format. This technique involves the conversion of raster images of characters into vector graphics, which are resolution-independent and scalable without loss of quality. The use of vectorization by the embedding module (212) ensures that the characters can be resized and manipulated for various digital applications while maintaining high visual fidelity. This capability is particularly advantageous in industries such as graphic design and digital content creation, where scalability and quality retention are critical.
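One plausible, assumed realization of such vectorization is to trace the binarized character contours and emit an SVG path, as sketched below; the disclosure does not prescribe SVG specifically.

```python
import cv2

def character_to_svg_path(binary_cell):
    """Trace a binarized character and return a resolution-independent SVG <path> element."""
    contours, _ = cv2.findContours(binary_cell, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    subpaths = []
    for contour in contours:
        points = contour.reshape(-1, 2)                       # (N, 1, 2) -> (N, 2) outline points
        subpaths.append("M " + " L ".join(f"{x} {y}" for x, y in points) + " Z")
    return '<path d="{}" fill="black"/>'.format(" ".join(subpaths))
```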
In a further embodiment, the transformation module (214) of the system (200) incorporates a G-code generator for converting the numerical format into G-code instructions. G-code is a language used by CNC machines to control their operations, including movements and tool functions. By employing a G-code generator, the transformation module (214) enables the system to directly interface with manufacturing and fabrication equipment, facilitating the automated creation of physical objects from handwritten designs. This feature is particularly beneficial for sectors involving rapid prototyping, custom manufacturing, and precision engineering, where the conversion of hand-drawn concepts into machine-operable commands adds significant value.
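A minimal sketch of a G-code generator for a pen-plotter-style CNC is shown below, assuming each character has already been reduced to a list of strokes (polylines of (x, y) points in millimetres); the feed rate and Z heights are assumed parameters, not values taken from the disclosure.

```python
def strokes_to_gcode(strokes, feed_rate=1200, pen_up=5.0, pen_down=0.0):
    """Emit pen-plotter G-code: rapid moves (G0) between strokes, feed moves (G1) along them."""
    lines = ["G21 ; units in millimetres", "G90 ; absolute positioning"]
    for stroke in strokes:
        (x0, y0), rest = stroke[0], stroke[1:]
        lines.append(f"G0 Z{pen_up:.2f}")                   # lift the tool for the travel move
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")             # travel to the start of the stroke
        lines.append(f"G1 Z{pen_down:.2f} F{feed_rate}")    # lower the tool
        for x, y in rest:
            lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed_rate}")
    lines.append(f"G0 Z{pen_up:.2f}")                       # lift the tool when finished
    return "\n".join(lines)
```

For instance, a single stroke [[(0, 0), (0, 10), (5, 10)]] yields a lift, a rapid move to the origin, a plunge, and two feed moves tracing the two segments, followed by a final lift.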
In an embodiment, the method (100) further comprises a step where the extracted characters are mapped onto a digital document to create an output that closely resembles human-written text. This step is executed subsequent to the embedding of characters into a numerical format, where the embedded characters are formatted and aligned to mimic the appearance and style of the original handwritten input. This functionality enables the production of digital documents that retain the personal touch and uniqueness of handwritten notes, making it ideal for applications requiring a human aesthetic, such as personalized correspondence or artistic presentations.
In another embodiment, the detection module (204) of the system (200) is equipped with dynamic scaling capabilities to effectively accommodate various character sizes within the grid layout. This enhancement allows the detection module (204) to adjust its parameters dynamically based on the size of the characters detected in the input image, ensuring accurate isolation and processing regardless of the character scale or font size. Such a capability is crucial for applications involving diverse handwriting styles and character densities, enhancing the system's versatility and effectiveness in processing handwritten inputs from multiple sources and contexts.
In an embodiment, the present disclosure provides a Handwriting-to-CNC system designed to convert handwritten instructions into CNC machine code, streamlining the process of programming CNC machines. Traditionally, creating instructions for CNC machines requires manual input of G-code, which is time-consuming and requires specialized knowledge. The system of the present disclosure overcomes these limitations by providing a more intuitive and automated method for generating CNC instructions. The proposed system begins by accepting an image of handwritten text, which can be uploaded by the user or captured through a camera (of a smartphone or any other imaging device). The captured image undergoes various pre-processing steps to isolate the handwritten text and enhance the clarity thereof. The pre-processing steps may be selected from background removal, border elimination, and image refinement. Such pre-processing improves the efficiency of character recognition by eliminating any elements that could interfere with this process. Once the image is prepared, the system employs Optical Character Recognition (OCR) technology to identify and convert the handwritten characters into digital text. This digital text represents the instructions for the CNC machine. Following OCR, the system may perform additional text processing steps, such as error correction and formatting, to ensure that the text aligns with the expected CNC instruction format.
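For illustration, the snippet below approximates the described pre-processing plus OCR stage using the open-source pytesseract wrapper around the Tesseract engine. Tesseract (whose handwriting accuracy is limited) is named here only as a stand-in for the OCR technology recited in the disclosure, and the `--psm 6` page-segmentation setting is an assumption.

```python
import cv2
import pytesseract

def handwriting_image_to_text(image_path):
    """Pre-process a photo of handwriting and run OCR; the result feeds a G-code generation step."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 3)                               # background noise removal
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary, config="--psm 6").strip()
```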
In another embodiment, the CNC G-code generation module utilizes the processed text and converts it into G-code, which is the programming language used by CNC machines. This conversion is crucial as it translates the user’s handwritten instructions into precise commands that the CNC machine can execute. The present system is user-friendly, offering a natural and intuitive interface that does not require prior knowledge of G-code programming. Further, the present disclosure improves efficiency by reducing the time required to program CNC machines and simplifies the workflow by automating the translation of handwritten instructions into machine-readable code. Additionally, the system enhances accuracy by minimizing the potential for errors that are common in manual G-code programming. Moreover, the system offers flexibility in recognizing different handwriting styles and has the potential to support multiple languages.
FIG. 3 illustrates a flowchart that represents the step-by-step process of converting handwriting into a G-code instruction, in accordance with the embodiments of the present disclosure. The process begins with a prompt for the user to write in grid format, followed by the input of the handwritten grid image into the system. Once the image is input, a bounding box detector identifies the individual characters within the grid. Subsequently, a background remover eliminates any non-essential elements, ensuring that only the characters are processed further. The next step, image categorization, involves classifying the detected characters into recognizable groups or types. After categorization, any superfluous borders are removed from around the characters in the border removal step. The characters are then embedded, which involves processing them in such a way that they are prepared for the final conversion to G-code.
FIG. 4 illustrates an exemplary grid layout showcasing an input interface for collection of the English alphabet in uppercase and lowercase letters, digits from 0 to 9, and a selection of common punctuation marks and symbols, in accordance with the embodiments of the present disclosure. Each character is contained within its own bounding box, suggesting that this format is likely used as a reference or a template for the handwriting-to-G-code conversion process depicted above. The grid layout enables uniformity and consistency in character size and spacing, which is essential for accurate detection and conversion in automated processes.
FIG. 5 illustrates a side-by-side comparison of an input and output associated with the handwriting to G-Code conversion process, in accordance with the embodiments of the present disclosure. On the left, the input showcases three lines of text written in English. On the right side, the output presents the same lines of text after undergoing the transformation process. The characters appear to be spaced out, with variations in character width and strokes.
Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein relates to a random access memory, read only memory and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with a variety of aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
Claims
I/We Claim:
1. A method (100) for converting handwritten characters into machine-readable instructions, comprising:
receiving from a user an image of handwritten characters organized within a grid layout;
detecting bounding boxes within the grid layout to isolate individual characters;
removing background noise from the image to enhance character recognition;
categorizing the isolated characters based on their ASCII equivalents;
removing borders from the isolated characters to refine the image;
embedding the characters into a numerical format for digital processing; and
transforming the numerical format into machine instructions.
2. The method (100) of claim 1, wherein the machine instructions are suitable for directing the operations of a Computer Numerical Control (CNC) machine.
3. The method (100) of claim 1, further comprising using a convolutional neural network for detecting bounding boxes within the grid layout.
4. The method (100) of claim 1, wherein removing background noise includes applying a thresholding technique to distinguish the characters from the background.
5. A system (200) for converting handwritten characters into machine-readable instructions, comprising:
an input module (202) configured to receive an image of handwritten characters organized within a grid layout;
a detection module (204) configured to detect bounding boxes within the grid layout to isolate individual characters;
a background noise removal module (206) configured to enhance character recognition by removing noise from the image;
a categorization module (208) configured to categorize the isolated characters based on their ASCII equivalents;
a border removal module (210) configured to refine the image by removing borders from the isolated characters;
an embedding module (212) configured to embed the characters into a numerical format for digital processing; and
a transformation module (214) configured to transform the numerical format into machine instructions.
6. The system (200) of claim 5, wherein the input module (202) includes an optical scanner or a digital camera interface for capturing images of the handwritten characters.
7. The system (200) of claim 5, wherein the embedding module (212) utilizes a vectorization technique to convert characters into a scalable vector format.
8. The system (200) of claim 5, wherein the transformation module (214) utilizes a G-code generator for converting the numerical format into G-code instructions.
9. The method (100) of claim 1, further comprising the step of mapping the extracted characters onto a digital document to create an output resembling human-written text.
10. The system (200) of claim 5, wherein the detection module (204) is equipped with dynamic scaling to accommodate various character sizes within the grid layout.
METHOD FOR CONVERTING HANDWRITTEN CHARACTERS INTO MACHINE-READABLE INSTRUCTIONS