
A Method Of Extracting Data And Reviewing An Engineering Drawing

Abstract: A method of extracting data from an engineering drawing and reviewing an engineering drawing is disclosed. The method may include receiving a first image of an engineering drawing. The method may further include receiving a second image of an engineering drawing. The method may further include identifying one or more text elements from the first image. The method may further include classifying an angle of projection associated with both the first and second image. The method may further include comparing both the first and second image to match similar views, based on feature segmentation. The method may further include mapping dimensions from the first image onto the second image upon comparison. The method may further include recording the one or more text elements and the dimensions in a database.


Patent Information

Application #
202141047316
Filing Date
19 October 2021
Publication Number
16/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

L&T TECHNOLOGY SERVICES LIMITED,
DLF IT SEZ Park, 2nd Floor – Block 3, Mount Poonamallee Road, Ramapuram, Chennai, INDIA

Inventors

1. ARJUN SINGH JADOUN
Jadoun Residency, New Pitambara Colony, Near Jhalkari Bai College, Gwalior – 474003, Madhya Pradesh, India

Specification

DESCRIPTION
Technical Field
[001] This disclosure relates generally to data extraction, and more particularly to a method of extracting data and reviewing an engineering drawing.

BACKGROUND
[002] A two-dimensional engineering drawing contains a large amount of information related to drawing entities and non-drawing entities, such as different parts, features, text elements, a title block, notes, a revision table, and a bill of materials (BOM). With the move towards the digital era, it may be desirable to store this previous knowledge in digital formats. For example, previous drawings may exist in hard-copy format, or in a version of software that is now obsolete. As such, it may be desirable to recreate those previous drawings in new software.
[003] Once the drawings are recreated, it is necessary to review the recreated drawings, for example, to verify whether all aspects of the previous drawings have been captured accurately in the recreated drawing. However, this is a time-consuming, tedious task that strains human capability. Some object detection techniques seek to identify all target objects in a target image and determine their categories and positions in order to achieve comprehension. However, these techniques, most of which are inspired by computer vision and deep learning methods, perform poorly in the detection of small, dense objects, and may even fail to detect objects with random geometric transformations.

SUMMARY OF THE INVENTION
[004] In an embodiment, a method of extracting data and reviewing an engineering drawing is disclosed. The method may include receiving a first image and a second image of an engineering drawing. The method may further include identifying one or more text elements from the first image. The one or more text elements may comprise a title block, notes, a revision table, and a bill of materials (BOM). The method may further include classifying an angle of projection associated with both the first and second images. The method may further include comparing both the first and second images to match similar views, based on feature segmentation. The method may further include mapping dimensions from the first image onto the second image upon comparison, and recording the one or more text elements and the dimensions in a database.

BRIEF DESCRIPTION OF THE DRAWINGS
[005] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
[001] FIG. 1 is a block diagram of an environment for extracting data and reviewing an engineering drawing, in accordance with an embodiment of the present disclosure.
[002] FIG. 2 is a flowchart of a method of extracting data and reviewing an engineering drawing, in accordance with an embodiment of the present disclosure.
[003] FIG. 3 is another flowchart of a method of extracting data and reviewing an engineering drawing, in accordance with an embodiment of the present disclosure.
[004] FIG. 4A illustrates a snapshot of an example engineering drawing (first image) from which data is to be extracted, in accordance with an embodiment.
[005] FIG. 4B illustrates a title block section extracted from the example engineering drawing of FIG. 4A, in accordance with an embodiment.
[006] FIG. 4C illustrates a projection information section extracted from the example engineering drawing of FIG. 4A, in accordance with an embodiment.
[007] FIGS. 5A-5B show data tables representing information about the engineering drawing, in accordance with an embodiment.
[008] FIG. 6 illustrates a first set of views of an example drawing pertaining to a third angle projection and a second set of views pertaining to a first angle projection, in accordance with an embodiment.
[009] FIG. 7 illustrates a process flow diagram of a process of extracting information from a first image, in accordance with an embodiment.
[010] FIG. 8 illustrates a table storing an output of data extraction, in accordance with an embodiment.

DETAILED DESCRIPTION OF THE DRAWINGS
[011] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims. Additional illustrative embodiments are listed below.
[012] The present invention relates to methods, apparatuses/systems, and software for comparing two drawings to generate a comparison drawing, with options for highlighting added, deleted, and unchanged graphic objects, and for comparing one drawing to many, or many drawings to many drawings, regardless of whether the drawings are similar. The method reduces computational work and minimizes human error through automated comparison analysis using computer vision, and helps in identifying any shape along with the dimensions or notes attached to it.
[013] Referring now to FIG. 1, a block diagram of an environment for extracting data and reviewing an engineering drawing is illustrated, in accordance with an embodiment of the disclosure. As shown in the FIG. 1, the environment 100 may include a system 102, a database 104, an external device 106, and a communication network 108. The system 102 may be communicatively coupled to the database 104 and the external device 106, via the communication network 108. In some embodiments, the system 102 may include an AI model 110, as part of an application stored in memory of the system 102.
[014] The system 102 may include suitable logic, circuitry, interfaces, and/or code that may be configured to extract data from an engineering drawing (first image) and review a new engineering drawing (second image) using the AI model 110. The AI model 110 may be trained to extract text values and map dimensions to corresponding parts of the new drawing. Once trained, the AI model 110 may be deployed to automatically fetch information related to the engineering drawing. By way of an example, the system 102 may be implemented as a plurality of distributed cloud-based resources by use of several technologies that are well known to those skilled in the art.
[015] The database 104 may include suitable logic, circuitry, interfaces, and/or code that may be configured to store data received, utilized and processed by the system 102. Although in FIG. 1, the system 102 and the database 104 are shown as two separate entities, this disclosure is not so limited. Accordingly, in some embodiments, the entire functionality of the database 104 may be included in the system 102, without a deviation from scope of the disclosure. The external device 106 may include suitable logic, circuitry, interfaces, and/or code that may be configured to deploy the AI model 110, as part of an application engine. The AI model 110 may be deployed on the external device 106 once the AI model 110 is trained on the system 102. The functionalities of the external device 106 may be implemented in portable devices, such as a high-speed computing device, and/or non-portable devices, such as a server.
[016] The communication network 108 may include a communication medium through which the system 102, the database 104, and the external device 106 may communicate with each other. Examples of the communication network 108 may include, but are not limited to, the Internet, a cloud network, a Wireless Fidelity (Wi-Fi) network, a Personal Area Network (PAN), a Local Area Network (LAN), or a Metropolitan Area Network (MAN). Various devices in the environment 100 may be configured to connect to the communication network 108, in accordance with various wired and wireless communication protocols. Examples of such wired and wireless communication protocols may include, but are not limited to, a Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, IEEE 802.11, Light Fidelity (Li-Fi), 802.16, IEEE 802.11s, IEEE 802.11g, multi-hop communication, wireless access point (AP), device-to-device communication, cellular communication protocols, and Bluetooth (BT) communication protocols.
[017] The AI model 110 may be referred to as a computational network or a system of artificial neurons, where each Neural Network (NN) layer of the AI model 110 includes artificial neurons as nodes. Outputs of all the nodes in the AI model 110 may be coupled to at least one node of preceding or succeeding NN layer(s) of the AI model 110. Similarly, inputs of all the nodes in the AI model 110 may be coupled to at least one node of preceding or succeeding NN layer(s) of the AI model 110. Node(s) in a final layer of the AI model 110 may receive inputs from at least one previous layer. A number of NN layers and a number of nodes in each NN layer may be determined from hyperparameters of the AI model 110. Such hyperparameters may be set before or while training the AI model 110 on a training dataset of images. Each node in the AI model 110 may correspond to a mathematical function with a set of parameters, tunable while the AI model 110 is trained. These parameters may include, for example, a weight parameter, a regularization parameter, and the like. Each node may use the mathematical function to compute an output based on one or more inputs from nodes in other layer(s) (e.g., previous layer(s)) of the AI model 110.
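Purely for illustration, the following minimal sketch (assuming PyTorch is available) shows how a layer count and layer widths, set as hyperparameters, determine a small feed-forward network whose weights are the tunable parameters described above. The sizes used are placeholders, not the actual AI model 110.

```python
# Illustrative only: a tiny feed-forward network whose depth and width
# are fixed by hyperparameters; the weights of each nn.Linear layer are
# the tunable parameters adjusted during training. Not the AI model 110.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, in_features, hidden_sizes, out_features):
        super().__init__()
        layers, prev = [], in_features
        for width in hidden_sizes:            # one hidden layer per entry
            layers += [nn.Linear(prev, width), nn.ReLU()]
            prev = width
        layers.append(nn.Linear(prev, out_features))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Hyperparameters (placeholder values) set before training.
model = TinyNet(in_features=64, hidden_sizes=[128, 128], out_features=6)
print(sum(p.numel() for p in model.parameters()), "tunable parameters")
```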
[018] The AI model 110 may include electronic data, such as, for example, a software program, code of the software program, libraries, applications, scripts, or other logic/instructions for execution by a processing device, such as the system 102 and the external device 106. Additionally, or alternatively, the AI model 110 may be implemented using hardware, such as a processor, a microprocessor (e.g., to perform or control performance of one or more operations), a field-programmable gate array (FPGA), or an application-specific integrated circuit (ASIC). In some embodiments, the AI model 110 may be implemented using a combination of both the hardware and the software program.
[019] The system 102 may be configured to receive a first image and a second image. Each of the first image and the second image may be a scanned image or a vector image of an engineering drawing including one or more views of an object. For example, the first image may be a reference image obtained from a customer, while the second image is a recreated image (generated from a corresponding 3D model, for example one created using "Solidworks" or any other software application). It may be further noted that the second image and the corresponding 3D model may then be delivered to the customer. The system 102 may be further configured to identify one or more text elements from the first image. The one or more text elements may include a title block, notes, a revision table, and a bill of materials (BOM). The system 102 may be further configured to, for each of the first image and the second image, classify an angle of projection associated with each of the one or more views into one or more categories. Further, the system 102 may be configured to compare the first image and the second image to match similar views, based on feature segmentation; upon comparison, map dimensions from the first image onto the second image; and record the one or more text elements and the dimensions in a database.
[020] Referring to FIG. 2, a flowchart of a method 200 of extracting data and reviewing an engineering drawing is illustrated, in accordance with an embodiment. At step 202, data may be extracted from the input image (first image) using the Open Source Computer Vision library (OpenCV), Tesseract, a deep learning model, and an Efficient and Accurate Scene Text detector (EAST) model. At step 204, attributes may be assigned to the shapes identified in the scanned image (first image). At step 206, an object may be created by associating those attributes with the shape extracted from the input image. At step 208, the object may be searched for in the deliverable file using a modified object detection algorithm or detection model. The model may be trained to perform as per the user's requirements.
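As a minimal sketch of steps 204-206, the following illustrates wrapping an extracted shape and its attributes into an object that can later be searched for in the deliverable file; the class and field names are hypothetical, not part of the disclosed method.

```python
# Hypothetical sketch of steps 204-206: attach attributes to an
# extracted shape and create a searchable object from it.
from dataclasses import dataclass, field

@dataclass
class DrawingObject:
    shape_type: str                       # e.g. "hole", "slot", "boss"
    dimensions: dict = field(default_factory=dict)
    notes: list = field(default_factory=list)

# Step 204/206: define attributes for a shape found in the scanned image.
hole = DrawingObject(
    shape_type="hole",
    dimensions={"diameter": 6.0, "depth": 12.0},   # placeholder values
    notes=["THRU", "2 PLACES"],
)
print(hole)  # this object would then be searched for in the deliverable
```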
[021] As will be appreciated by those skilled in the art, OpenCV is a library of programming functions mainly aimed at real-time computer vision. OpenCV may help to process an image and apply various functions, such as image resizing, pixel manipulation, object detection, and many more. In an embodiment, Tesseract, an open-source text recognition engine available under the Apache 2.0 license, may be used to extract printed text from images, recognize text from a large document, or recognize text from an image of a single text line.
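A minimal OCR sketch using OpenCV together with Tesseract via the common pytesseract wrapper is shown below; the filename and the Otsu binarization step are assumptions, not requirements of the disclosure.

```python
# Sketch: read a drawing image, binarize it, and extract printed text
# with Tesseract. "drawing.png" is a placeholder filename.
import cv2
import pytesseract

img = cv2.imread("drawing.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
text = pytesseract.image_to_string(binary)   # full-page text recognition
print(text)
```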
[022] In an embodiment, deep learning approaches such as neural networks may be used to combine the tasks of localizing text in an image (text detection) with understanding what the text is (text recognition). The Efficient and Accurate Scene Text detector (EAST) algorithm uses a single neural network to predict word- or line-level text. It may detect text in arbitrary orientations, outputting quadrilateral text regions.
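The following sketch shows the widely used way of running EAST through OpenCV's DNN module; it assumes a pre-trained "frozen_east_text_detection.pb" model file is available, and it omits the decoding of the geometry map into boxes for brevity.

```python
# Sketch: forward pass of the EAST detector via OpenCV's DNN module.
# Input dimensions must be multiples of 32; the mean values below are
# the ones the public EAST model was trained with.
import cv2

net = cv2.dnn.readNet("frozen_east_text_detection.pb")  # assumed model file
image = cv2.imread("drawing.png")
blob = cv2.dnn.blobFromImage(image, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                "feature_fusion/concat_3"])
# "scores" holds per-location text confidences; "geometry" encodes a
# rotated quadrilateral for each candidate text region.
print(scores.shape, geometry.shape)
```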
[023] Referring to FIG. 3, a flowchart of a method 300 of extracting data and reviewing an engineering drawing is illustrated, in accordance with an embodiment. At step 302, a first image comprising one or more views of an object may be received as an input. The first image may be a scanned image of an engineering drawing. At step 304, a second image comprising one or more views of an object may be received for comparison with the first image. The second image may be a vector image of the engineering drawing. At step 306, the one or more text elements may be identified from the first image. The one or more text elements may include a title block, notes, a revision table, and a bill of materials (BOM). At step 308, an angle of projection associated with each of the one or more views of both images (first and second) may be classified into one or more categories. At step 310, similar views from the first image and the second image may be matched on the basis of feature segmentation. At step 312, dimensions from the first image may be mapped onto the second image upon comparison. At step 314, the one or more text elements and the dimensions may be recorded in a database (a minimal sketch follows). The method 300 is further explained using example scenarios via FIGS. 4-8.
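As a minimal sketch of the recording step 314, the snippet below stores extracted text elements and dimensions in a SQLite database; the schema and example rows are assumptions for illustration only.

```python
# Sketch of step 314: persist extracted text elements and dimensions.
# The table layout is a hypothetical example, not a prescribed schema.
import sqlite3

conn = sqlite3.connect("drawing_data.db")
conn.execute("""CREATE TABLE IF NOT EXISTS extracted_data (
                    drawing_id   TEXT,
                    element_type TEXT,
                    value        TEXT)""")
rows = [("DWG-001", "title_block/material", "PEI"),
        ("DWG-001", "dimension/width", "120 mm")]
conn.executemany("INSERT INTO extracted_data VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
```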
[024] In an embodiment, identifying the one or more text elements from the first image may include mapping the identified one or more text elements with a repository to determine standardized text elements corresponding to the one or more text elements. The one or more text elements are identified based on at least one of an OpenCV model, a Tesseract model, and a deep learning model. In an embodiment, the one or more categories comprise a front view, a top view, a section view, a detailed view, an auxiliary view, and an orthographic view. Furthermore, mapping the dimensions may include converting values or metric units of the dimensions. In an embodiment, the second image may include the identified one or more shape attributes and the one or more text elements associated with the first image.
[025] Referring to FIG. 4A, a snapshot of an example engineering drawing 400A (first image) from which data is to be extracted is illustrated, in accordance with an embodiment. The engineering drawing 400A may act as the input image. The engineering drawing 400A may be in a "pdf" format or a "jpg" format. In other words, the engineering drawing 400A may be a scanned image (or simply an image taken with a mobile camera) of a hard-copy engineering drawing. As such, the input image might be noisy and blurred. It may be desirable to extract data and recreate a second image, based on the first image, in a new software application, such that the result is easily readable.
[026] As will be appreciated, it may be challenging to extract information from a blurry and noisy image using simple OCR. As such, deep learning methods may be used to extract the text from tables, notes, the BOM, revision tables, lines, and other references.
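A common preprocessing sketch for such noisy scans, assuming OpenCV, is shown below: denoise, then binarize adaptively before any OCR or detection pass. The parameter values are illustrative starting points rather than tuned settings.

```python
# Sketch: clean up a noisy, blurred scan before text extraction.
import cv2

gray = cv2.imread("scanned_drawing.png", cv2.IMREAD_GRAYSCALE)
# Non-local-means denoising; h controls filter strength.
denoised = cv2.fastNlMeansDenoising(gray, None, h=10,
                                    templateWindowSize=7, searchWindowSize=21)
# Adaptive thresholding copes with uneven lighting across the sheet.
binary = cv2.adaptiveThreshold(denoised, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 10)
cv2.imwrite("preprocessed.png", binary)
```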
[027] To this end, different components from engineering drawing 400A (input image) may be segregated and extracted. For example, upon segregating, a title block section 400B, as shown in FIG. 4B, may be extracted. Further, upon segregation, a projection information section 400C as shown in FIG. 4C, may be extracted. As will be understood, similar shapes may exist in most engineering drawings, although the shapes of the extracted sections may differ by some degree from drawing to drawing.
[028] Further, based on the above extracted sections, information about the engineering drawing 400A may be extracted. For example, referring now to FIGS. 5A-5B, data tables 500A and 500B representing information about the engineering drawing 400A are illustrated. As shown in FIG. 5A, information about "Angle of Projection", "Size", "Designed by", and "Date" may be extracted. Such information may be stored in separate cells in a single row.
[029] Further, information about "Created by", revision tables, and materials may be extracted. However, in some cases, the information may not exist in a standard format. For example, the material information mentioned in a title block of the engineering drawing 400A may be "PEI", while the corresponding material information mentioned in notes associated with the engineering drawing 400A may be "Polyamideimide". It should be noted that in such cases, higher preference may be given to the notes as compared to the title block, since notes are special comments that are generally followed. Therefore, the information extracted may be stored in the table 500B, as shown in FIG. 5B.
[030] It may be further noted that, in some cases, some extracted terms may be in a non-standard format. For example, the term "Material" may exist as "mtl." or "MATL.". To this end, the AI model may be trained to interpret the non-standard format and to store the information after converting it to a standard format. As such, the term "mtl." may be stored as "material", as shown in FIG. 5B.
[031] In some embodiments, additionally, when the engineering drawing 400A is recreated, i.e. a second image is created, the standard information may be used instead of the non-standard information which was used in the first image. Therefore, in the title block of the second image, the entry "matl." may be overwritten (replaced) with "Material". Priority may be given to recent revisions, then to notes, and then to the title block, as sketched below.
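A minimal sketch of this normalization and priority logic follows; the synonym table and source names are small assumptions made for illustration.

```python
# Sketch: map non-standard labels to standard terms, and resolve
# conflicting values by source priority (revision > notes > title block).
SYNONYMS = {"mtl.": "material", "matl.": "material", "rev.": "revision"}

def normalize(term):
    return SYNONYMS.get(term.strip().lower(), term.strip().lower())

def resolve(values_by_source):
    # Highest-priority source that has a value wins.
    for source in ("revision", "notes", "title_block"):
        if values_by_source.get(source):
            return values_by_source[source]
    return ""

print(normalize("MATL."))   # -> material
print(resolve({"title_block": "PEI", "notes": "Polyamideimide"}))  # notes win
```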
[032] In some embodiments, the angle of projection may be checked and views on the drawing sheet may be classified. For example, the views may be classified as one of a front view, a top view, a section view, a detailed view, an auxiliary view, an orthographic view, etc. As such, the number of views may be calculated, and the type (classification) of each view may be determined. It should be noted that the views may be identified as a front view, top view, right-side view, etc., based on the angle of projection and the dimensions. For example, in some cases, the name of the view or a symbol suggestive of the view may be mentioned below the figure, which may be used to identify the view.
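Where a view name or a suggestive label appears beneath a figure, a simple keyword heuristic can assign the category, as in the hypothetical sketch below; the keyword table is an assumption, not part of the disclosure.

```python
# Hypothetical sketch: classify a view from the label found beneath it.
VIEW_KEYWORDS = {
    "section": "section view",
    "detail": "detailed view",
    "auxiliary": "auxiliary view",
    "front": "front view",
    "top": "top view",
}

def classify_view(label):
    label = label.lower()
    for keyword, category in VIEW_KEYWORDS.items():
        if keyword in label:
            return category
    return "orthographic view"   # fallback when no keyword matches

print(classify_view("SECTION A-A"))   # -> section view
```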
[033] Referring now to FIG. 6, a first set of views 600A of an example drawing pertaining to a third angle projection and a second set of views 600B pertaining to a first angle projection are illustrated. A front view of the first set of views 600A may be identified to be similar to a front view of the second set of views 600B. The above identification may be carried out using feature recognition on the shapes.
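One way to realize such feature recognition, sketched below under the assumption that OpenCV's ORB detector is acceptable, is to match keypoint descriptors between a view cropped from each image; many good, low-distance matches suggest the two crops depict the same view.

```python
# Sketch: match two cropped views by ORB keypoint descriptors.
import cv2

view_a = cv2.imread("first_image_front_view.png", cv2.IMREAD_GRAYSCALE)
view_b = cv2.imread("second_image_front_view.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp_a, des_a = orb.detectAndCompute(view_a, None)
kp_b, des_b = orb.detectAndCompute(view_b, None)

# Hamming distance suits ORB's binary descriptors; cross-check for quality.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance:.1f}")
```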
[034] Further, dimension data may be extracted from the different sets of views and stored in a spreadsheet. For example, in FIG. 6, width and height information may be extracted from the front views, depth and width information from the top views, and height and length information from the right-hand-side (RHS) view. Further, any other feature dimension may be measured in its respective view (i.e. the view in which the complete feature is visible).
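The following sketch illustrates storing per-view dimension data in a spreadsheet, here a plain CSV file written with the standard library; the column names follow the views described above and are otherwise assumptions.

```python
# Sketch: record which dimensions were measured in which view.
import csv

rows = [
    {"view": "front", "width": 120.0, "height": 80.0},
    {"view": "top", "width": 120.0, "depth": 40.0},
    {"view": "RHS", "height": 80.0, "length": 40.0},
]
with open("dimensions.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["view", "width", "height", "depth", "length"])
    writer.writeheader()
    writer.writerows(rows)   # missing cells are left blank
```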
[035] Referring now to FIG. 7, a process flow diagram of a process 700 of extracting information from the first image is illustrated. At step 702, the input image may be fed into the system. At step 704, text from the title block, notes, and BOM may be extracted. It should be noted that the image may be blurred, or some names may be abbreviated (i.e. in a non-standard format). For such cases, the information may be extracted using a deep learning model. The output may be stored in tabular form 800, as shown in FIG. 8.
[036] In some embodiments, the information may be extracted from the second image and stored in a respective table of a spreadsheet. Thereafter, both spreadsheets (i.e. for the first image and the second image) may be compared. If any mismatch is found based on the comparison, the respective parts may be marked, for example, encircled in the images. Based on the comparison, the AI model is trained.
[037] At step 706, each view of the first image may be compared with a corresponding view of the second image (as indicated by the encircled view names in FIG. 7), based on feature segmentation. The AI model may be trained based on the comparison. Further, dimensions (corresponding to a view) may be captured from the first image, and the same dimensions (corresponding to a similar view) may be located on the second image. Further, the units of the dimensions may be changed and, accordingly, the values of the dimensions in the new units may be calculated.
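A minimal sketch of the unit conversion mentioned above follows; only millimetres and inches, the units most commonly mixed on engineering drawings, are covered here.

```python
# Sketch: recalculate a mapped dimension's value in new units.
MM_PER_UNIT = {"mm": 1.0, "in": 25.4}

def convert(value, from_unit, to_unit):
    return value * MM_PER_UNIT[from_unit] / MM_PER_UNIT[to_unit]

print(convert(1.5, "in", "mm"))              # -> 38.1
print(round(convert(38.1, "mm", "in"), 3))   # -> 1.5
```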
[038] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.
CLAIMS

WE CLAIM:

1. A method of extracting data and reviewing an engineering drawing, the method comprising:
receiving a first image, wherein the first image is one of a scanned image or a vector image of an engineering drawing, wherein the first image comprises one or more views of an object;
receiving a second image, wherein the second image is one of a scanned image or a vector image of the engineering drawing, wherein the second image comprises one or more views of an object, wherein the second image is a recreated version of the first image;
identifying one or more text elements from the first image, wherein the one or more text elements comprise a title block, notes, revision table and bill of materials (BOM);
for each of the first image and the second image,
classifying an angle of projection associated with each of the one or more views into one or more categories;
comparing to match similar views from the first image and the second image, based on feature segmentation;
upon comparison, mapping dimensions from the first image onto the second image; and
recording the one or more text elements and the dimensions in a database.

2. The method as claimed in claim 1, wherein identifying the one or more text elements from the first image comprises:
mapping the identified one or more text elements with a repository to determine standardized text elements corresponding to the one or more text elements.

3. The method as claimed in claim 2, wherein the one or more text elements are identified based on at least one of an OpenCV model, a Tesseract model, and a deep learning model.

4. The method as claimed in claim 1, wherein upon comparison, the method comprises:
identifying one or more mismatched regions in the second image; and
marking the identified one or more mismatched regions.

5. The method as claimed in claim 1, wherein the one or more categories comprise a front view, a top view, a section view, a detailed view, an auxiliary view, and an orthographic view.

6. The method as claimed in claim 1, wherein mapping dimensions further comprises:
converting values or metric units of the dimensions.

Documents

Application Documents

# Name Date
1 202141047316-STATEMENT OF UNDERTAKING (FORM 3) [19-10-2021(online)].pdf 2021-10-19
2 202141047316-PROVISIONAL SPECIFICATION [19-10-2021(online)].pdf 2021-10-19
3 202141047316-POWER OF AUTHORITY [19-10-2021(online)].pdf 2021-10-19
4 202141047316-FORM 1 [19-10-2021(online)].pdf 2021-10-19
5 202141047316-DRAWINGS [19-10-2021(online)].pdf 2021-10-19
6 202141047316-DECLARATION OF INVENTORSHIP (FORM 5) [19-10-2021(online)].pdf 2021-10-19
7 202141047316-Proof of Right [05-04-2022(online)].pdf 2022-04-05
8 202141047316-SEQUENCE LISTING (.txt) [27-04-2022(online)].txt 2022-04-27
9 202141047316-DRAWING [27-04-2022(online)].pdf 2022-04-27
10 202141047316-CORRESPONDENCE-OTHERS [27-04-2022(online)].pdf 2022-04-27
11 202141047316-COMPLETE SPECIFICATION [27-04-2022(online)].pdf 2022-04-27