
System And Method Of Automated Document Processing For Structured, Semi-Structured And Unstructured Documents

Abstract: According to the present disclosure, a method and system of automated document processing are disclosed. The method comprises the steps of: (i) obtaining at least one image of a scanned document in at least one format selected from a TIFF format, a PDF format, a JPEG format, or an EXCEL format, from an input unit, (ii) reading a content data along with a plurality of ruled lines from the image, (iii) generating a topography information from the content data, (iv) storing the topography information, (v) locating a relevant information on the image using the topography information, (vi) classifying the relevant information into a textual information and a non-textual information, (vii) recognizing and retrieving the textual information and the non-textual information simultaneously from the image and (viii) performing validation, applying business rules to ensure the desired accuracy, and formatting and rendering the textual information and the non-textual information of the image simultaneously on an output display unit.


Patent Information

Application #
Filing Date: 21 June 2018
Publication Number: 52/2019
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status
Email: tarun@khuranaandkhurana.com
Parent Application
Patent Number
Legal Status
Grant Date: 2024-05-30
Renewal Date

Applicants

DATAMATICS GLOBAL SERVICES LIMITED
KNOWLEDGE CENTRE, PLOT NO. 58, STREET NO.17, MIDC, ANDHERI (EAST), MUMBAI - 400093

Inventors

1. Dr Kanodia Lalit
Knowledge Centre, Plot No. 58, Street No. 17, MIDC, Andheri (East), Mumbai - 400093
2. Agarwal Rajesh
Knowledge Centre, Plot No. 58, Street No. 17, MIDC, Andheri (East), Mumbai - 400093
3. Nakrani Shantilal
Knowledge Centre, Plot No. 58, Street No. 17, MIDC, Andheri (East), Mumbai - 400093
4. Pandya Satyam
Knowledge Centre, Plot No. 58, Street No. 17, MIDC, Andheri (East), Mumbai - 400093

Specification

Claims:
1. A method of automated document processing, the method comprising:
obtaining at least one image of a scanned document or an electronic document from an input unit, wherein the scanned document or the electronic document is in at least one format selected from a TIFF format, a PDF format, a JPEG format, or an EXCEL format;
reading a content data along with a plurality of ruled lines from the image, wherein the plurality of ruled lines comprises horizontal ruled lines and vertical ruled lines;
generating a topography information from the content data and the plurality of ruled lines, wherein the topography information comprises a line meta data, an OCR meta data and a zone meta data;
storing the topography information for the content data;
locating a relevant information on the image using the topography information;
classifying the relevant information into a textual information and a non-textual information;
recognizing and retrieving the textual information and the non-textual information simultaneously from the image; and
rendering the textual information and the non-textual information of the image simultaneously on an output display unit.
2. The method as claimed in claim 1, further comprising delivering the textual information and the non-textual information of the image to at least one enterprise application or file system.
3. The method as claimed in claim 1, wherein the scanned document is a structured document or a semi-structured document.
4. The method as claimed in claim 1, wherein the scanned document is an unstructured document.
5. The method as claimed in claim 1, wherein recognizing the textual information further comprises recognizing typewritten, printed and constrained characters from the image.
6. The method as claimed in claim 1, wherein recognizing the textual information further comprises recognizing font information from the image.
7. The method as claimed in claim 1, wherein recognizing the non-textual information comprises recognizing a layout information.
8. The method as claimed in claim 1, wherein recognizing the non-textual information further comprises recognizing OMR boxes and bar codes.
9. The method as claimed in claim 1, wherein the line meta data comprises (i) number of horizontal ruled lines and vertical ruled lines on the image, (ii) position of horizontal ruled lines and vertical ruled lines on the image, (iii) length and width of horizontal ruled lines and vertical ruled lines and (iv) intersections created by the horizontal and vertical lines.
10. The method as claimed in claim 1, wherein the OCR meta data comprises number of characters on the image, position of each character on the image, OCR confidence level of each character and one or more attributes of each character.
11. The method as claimed in claim 10, wherein the position of each character comprises an x coordinate and a y coordinate of each character.
12. The method as claimed in claim 10, wherein one or more attributes of each character comprises bold, italic and underlining of each character.
13. The method as claimed in claim 1, wherein the zone meta data comprises an area information occupied by each word on the image.

14. A method of automated document processing, the method comprising the steps of:
obtaining at least one image of a scanned document from an input unit;
preprocessing the image of the scanned document, wherein preprocessing comprises
pre-correcting the image;
reading a content data along with a plurality of ruled lines from the image;
generating a topography information from the content data and the plurality of ruled lines comprising horizontal ruled lines and vertical ruled lines;
determining a template from the topography information;
wherein the topography information comprises a line meta data, an OCR meta data and a zone meta data;
capturing one or more fields of a relevant information on the image using the topography information;
processing one or more fields captured from the image to recognize and retrieve a relevant information comprising a textual information and a non-textual information;
validating one or more fields captured from the image; and
rendering the textual information and the non-textual information corresponding to the one or more fields captured from the image, simultaneously on an output display unit.
15. The method as claimed in claim 14, wherein the scanned document is a structured document or a semi-structured document.
16. The method as claimed in claim 14, wherein the scanned document is an unstructured document.
17. The method as claimed in claim 14, wherein pre-correcting the image comprises
correcting a defect on the image, wherein a defect comprises (i) misalignment of the image and (ii) out of sequence in pages of the image;
removing horizontal ruled lines and vertical ruled lines; and
repairing characters or lines printed on the image.
18. The method as claimed in claim 14, wherein processing one or more fields comprises
generating a plurality of micro zones and macro zones for one or more fields by using the zone meta data;
correcting automatically boundaries and content of the plurality of micro zones and macro zones; and
capturing automatically the textual information and the non-textual information from the plurality of micro zones and macro zones.
19. The method as claimed in claim 14, wherein validating one or more fields comprises
validating the relevant information on a field-to-field basis;
validating the relevant information on a document-to-document basis;
updating the relevant information on the image based on validation of relevant information;
reformatting one or more fields on the image;
validating one or more reformatted fields;
forming one or more derived fields on the image; and
assigning a flag to each field on the image.
20. The method as claimed in claim 14, wherein recognizing the textual information comprises recognizing typewritten, printed and constrained characters from the image.
21. The method as claimed in claim 14, wherein recognizing the textual information further comprises recognizing font information from the image.
22. The method as claimed in claim 14, wherein recognizing the non-textual information comprises recognizing a layout information.
23. The automated document processing method as claimed in claim 14, wherein recognizing the non-textual information further comprises recognizing OMR boxes and bar codes.
24. The method as claimed in claim 14, wherein the line meta data comprises (i) number of horizontal ruled lines and vertical ruled lines on the image, (ii) position of horizontal ruled lines and vertical ruled lines on the image, (iii) length and width of horizontal ruled lines and vertical ruled lines and (iv) intersections created by the horizontal and vertical lines.
25. The method as claimed in claim 14, wherein the OCR meta data comprises number of words and characters on the image, position of each character on the image, OCR confidence level of each character and one or more attributes of each character.
26. The method as claimed in claim 14, wherein the zone meta data comprises an area information occupied by each word or line on the image.

27. An automated document processing system, comprising:
an input unit to provide at least one image of a scanned document;
a memory unit comprising a primary memory unit and a secondary memory unit;
an output display unit; and
a central processing unit comprising one or more processors configured to perform a method of automated document processing, wherein the method comprises the steps of:
obtaining at least one image of the scanned document from the input unit;
preprocessing the image of the scanned document, wherein preprocessing comprises
pre-correcting the image;
reading a content data along with a plurality of ruled lines from the image;
generating a topography information from the content data and the plurality of ruled lines comprising horizontal ruled lines and vertical ruled lines;
determining a template from the topography information;
wherein the topography information comprises a line meta data, an OCR meta data and a zone meta data;
capturing one or more fields of a relevant information on the image using the topography information;
processing one or more fields captured from the image to recognize and retrieve a relevant information comprising a textual information and a non-textual information;
validating one or more fields captured from the image; and
rendering the textual information and the non-textual information corresponding to the one or more fields captured from the image, simultaneously on the output display unit.
28. The automated document processing system as claimed in claim 27, wherein the scanned document is a structured document or a semi-structured document.
29. The automated document processing system as claimed in claim 27, wherein the scanned document is an unstructured document.
30. The automated document processing system as claimed in claim 27, wherein the content data comprises a semi-structured or an unstructured information of the image.
31. The automated document processing system as claimed in claim 27, wherein the line meta data comprises (i) number of horizontal ruled lines and vertical ruled lines on the image, (ii) position of horizontal ruled lines and vertical ruled lines on the image, (iii) length and width of horizontal ruled lines and vertical ruled lines and (iv) intersections created by the horizontal and vertical lines.
32. The automated document processing system as claimed in claim 27, wherein the OCR meta data comprises number of characters on the image, position of each character on the image and one or more attributes of each character.
33. The automated document processing system as claimed in claim 32, wherein the position of each character comprises x coordinate and y coordinate of each character.
34. The automated document processing system as claimed in claim 32, wherein one or more attributes of each character comprises bold, italic and underlining of each character.
35. The automated document processing system as claimed in claim 27, wherein the zone meta data comprises an area information occupied by each word or line on the image.

Description: SYSTEM AND METHOD OF AUTOMATED DOCUMENT PROCESSING FOR STRUCTURED, SEMI-STRUCTURED AND UNSTRUCTURED DOCUMENTS
TECHNICAL FIELD
[0001] The present disclosure generally relates to document processing, and more particularly, the present disclosure relates to an automated processing of structured, semi-structured and unstructured documents.
BACKGROUND
[0002] Digital images having depicted therein a document such as an order, a contract, an invoice, a health claim, a statement, a payslip, an application form, etc. have conventionally been captured and processed using a scanner or multifunction peripheral coupled to a computer workstation such as a laptop or desktop computer. Methods and systems capable of performing such capture and processing are well known in the art and well adapted to the tasks for which they are employed.

[0003] Further, an enterprise may need to receive documents from third parties for the benefit of its customers. The enterprise can be a health care provider, a bank, an insurance company, a manufacturing, logistics or retail business, a government agency, a corporation, an organization, or another commercial, non-profit or charitable entity, or the like. These documents can include invoices for services rendered by a third party, medical/dental lab results, referral information, insurance information, and a variety of other documents the enterprise can use to render service to its customers or patients. Additionally, the enterprise may need to forward or distribute documents to third parties for the benefit of the customers or patients of the enterprise. These documents can include prescriptions, care directives, requests for lab tests, invoices to insurance companies, and the like. This flow of documents into and out of the enterprise can be cumbersome and error-prone, especially when the documents are unstructured.

[0004] Unstructured documents are collections of information, the components of which are not readily distinguishable by computer processing means. Letters and memos are examples of unstructured documents. Other examples of unstructured documents include text documents, faxes, emails, image files, and the like. Examples of semi-structured documents include invoices, orders and contracts, and examples of structured documents include application forms and health claim documents. Other document formats can include bitmapped format, binary format, Joint Photographic Experts Group (JPEG) format, Tagged Image File Format (TIFF), Portable Document Format (PDF) and Microsoft Excel (XLS, XLSX).

[0005] Extracting information from unstructured and semi-structured documents requires considerable manual effort and is time-consuming, because the information can be anywhere on the document. Some solutions offered by conventional document processing systems include defining standardized forms or templates for submittal to the enterprise. With manual processes and forms, the extracted information is error-prone and must be verified before it is consumed by other systems.

[0006] Therefore, there is a need for an automated processing of structured, semi-structured and unstructured documents.

SUMMARY
[0007] In one aspect of the present disclosure, a method of automated document processing is disclosed. The method comprises the steps of: (i) obtaining at least one image of a scanned document from an input unit, (ii) reading a content data along with a plurality of ruled lines from the image, (iii) generating a topography information from the content data and the plurality of ruled lines comprising horizontal ruled lines and vertical ruled lines, (iv) storing the topography information for the content data, wherein the topography information comprises a line meta data, an OCR meta data and a zone meta data, (v) locating a relevant information on the image using the topography information, (vi) classifying the relevant information into a textual information and a non-textual information, (vii) recognizing the textual information and the non-textual information simultaneously from the image, and (viii) performing validation, applying business rules to ensure the desired accuracy, formatting, and delivering the textual information and the non-textual information, comprising a layout information, OMR tick boxes and barcodes of the image, to at least one enterprise application such as CRM, ERP or core operation systems, or to file systems such as TEXT, CSV, EXCEL or XML, simultaneously on an output display unit.

[0008] In another aspect of the present disclosure, a method of automated document processing is disclosed. The method comprises the steps of: (i) obtaining at least one image of a scanned document or an electronic document from an input unit, (ii) preprocessing the image of the scanned document (de-speckling, de-skewing, enhancing brightness and contrast, and removing lines and unwanted colors) to ensure better-quality OCR output, (iii) capturing one or more fields of a relevant information on the image using the topography information, (iv) processing one or more fields captured from the image to retrieve a relevant information comprising a textual information and a non-textual information, (v) validating one or more fields captured from the image and (vi) delivering the textual information and the non-textual information corresponding to the one or more fields captured from the image. The preprocessing step comprises pre-correcting the image, reading a content data along with a plurality of ruled lines from the image, generating a topography information from the content data and the plurality of ruled lines comprising horizontal ruled lines and vertical ruled lines, and determining a template from the topography information, wherein the topography information comprises a line meta data, an OCR meta data and a zone meta data.

[0009] In yet another aspect of the present disclosure, a document processing system is disclosed. The system comprises an input unit to provide an image of a scanned document or an electronic document, a memory unit comprising a primary memory unit and a secondary memory unit, an output display unit, and a central processing unit comprising one or more processors connected with the input unit. One or more processors are configured to perform an automated document processing method, wherein the method comprises the steps of: (i) obtaining at least one image of a scanned document or an electronic document from an input unit, (ii) preprocessing the image of the scanned document, (iii) capturing one or more fields of a relevant information on the image using the topography information, (iv) processing one or more fields captured from the image to retrieve a relevant information comprising a textual information and a non-textual information, (v) validating one or more fields captured from the image and (vi) delivering the textual information and the non-textual information corresponding to the one or more fields captured from the image, to at least one enterprise application such as CRM, ERP or core operation systems, or into file systems such as TEXT, CSV, EXCEL or XML, simultaneously on an output display unit.

BRIEF DESCRIPTION OF THE DRAWINGS
[0010] Figure 1 illustrates a flow chart with the three essential steps of the automated document processing in accordance with the present disclosure.
[0011] Figure 2 illustrates a flow chart for the preprocessing step in accordance with the present disclosure.
[0012] Figure 3 illustrates a schematic diagram of internal modules involved in the image processing and data extraction technique of the scanned document.
[0013] Figure 4 illustrates a flow chart for the processing records step in accordance with the present disclosure.
[0014] Figure 5 illustrates a hardware architecture of an automated document processing system in accordance with the present disclosure.

DETAILED DESCRIPTION

[0015] A system for automated document processing and a method for the same are disclosed. The automated document processing methods are disclosed for use in processing structured, semi-structured and unstructured documents. The system is an artificial intelligence and fuzzy logic based document processing engine, enabling an application to read, interpret and understand the topography of the contents, classify the contents, extract the required information automatically with minimal user intervention, and store it in a specified data file.

[0016] In the following description, numerous specific details are set forth. However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known processes, structures and techniques have not been shown in detail in order not to obscure the clarity of this description. Various embodiments are described below in connection with the figures provided herein.

[0017] According to the present disclosure, the automated document processing method comprises the steps of: obtaining at least one image of a scanned document or an electronic document, reading a content data along with a plurality of ruled lines from the image, and generating a topography information from the content data and the plurality of ruled lines comprising horizontal ruled lines and vertical ruled lines. The scanned document or the electronic document may be in any file format supported by the input unit, selected from a group of formats comprising a TIFF format, a JPEG format, a BMP format, a PDF format and an EXCEL file format. In general, one or more images of the scanned document are obtained from an input unit; the input unit may be a part of an electronic device such as a scanner or a camera. In some embodiments, the input unit may be an electronic device connected externally to another electronic device. The electronic device may be, but is not limited to, a computer, a laptop, a mobile/portable device or a handheld device. The scanned document may be a structured, a semi-structured or an unstructured document. The content data on the scanned document may be structured, semi-structured or unstructured, and the scanned document is accordingly identified as a structured, semi-structured or unstructured document. The horizontal ruled lines and vertical ruled lines are read as x-axis and y-axis coordinates in the image of the scanned document.

[0018] The generated topography information comprises a line meta data, an OCR meta data and a zone meta data. The line meta data comprises information about (i) the number of horizontal ruled lines and vertical ruled lines on the image, (ii) the position of horizontal ruled lines and vertical ruled lines on the image, (iii) the length and width of horizontal ruled lines and vertical ruled lines and (iv) the intersections created by the horizontal and vertical lines. The automated document processing method further comprises the steps of: storing the topography information for the content data, locating a relevant information on the image using the topography information, classifying the relevant information into a textual information and a non-textual information, recognizing the textual information and the non-textual information simultaneously from the image, and delivering the textual information and the non-textual information of the scanned document simultaneously on an output display unit. The textual information and the non-textual information are delivered to at least one enterprise application running on the output display unit. Such enterprise applications may be, but are not limited to, CRM, ERP or core operation systems, or file systems such as TEXT, CSV, EXCEL or XML. The output display unit may be a part of an electronic device such as an electronic display unit, a touch screen display, a computer monitor or a similar display device.
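The three kinds of topography metadata described above can be modelled as plain data structures. The following Python sketch is illustrative only (the class and field names are assumptions, not taken from the disclosure); it also shows how the intersections of horizontal and vertical ruled lines, item (iv) of the line meta data, could be derived:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RuledLine:
    # One horizontal or vertical ruled line: position, length and width.
    x: int
    y: int
    length: int
    width: int
    orientation: str  # "horizontal" or "vertical"

@dataclass
class CharMeta:
    # OCR meta data for one character: position, confidence and attributes.
    char: str
    x: int
    y: int
    confidence: float  # OCR confidence level, 0.0-1.0
    bold: bool = False
    italic: bool = False
    underlined: bool = False

@dataclass
class Zone:
    # Zone meta data: the area occupied by a word on the image.
    word: str
    bbox: Tuple[int, int, int, int]  # (left, top, right, bottom)

@dataclass
class Topography:
    lines: List[RuledLine] = field(default_factory=list)
    chars: List[CharMeta] = field(default_factory=list)
    zones: List[Zone] = field(default_factory=list)

    def intersections(self) -> List[Tuple[int, int]]:
        # Points where a horizontal ruled line crosses a vertical one.
        points = []
        for h in (l for l in self.lines if l.orientation == "horizontal"):
            for v in (l for l in self.lines if l.orientation == "vertical"):
                if h.x <= v.x <= h.x + h.length and v.y <= h.y <= v.y + v.length:
                    points.append((v.x, h.y))
        return points
```

A table cell grid, for instance, could then be recovered from the intersection points of the ruled lines.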

[0019] According to the present disclosure, the image processing technique for the image of the scanned document comprises at least three essential steps:
(i) preprocessing the image;
(ii) capturing and processing fields of the image; and
(iii) processing records of the image.
Figure 1 illustrates a flow chart with these three essential steps of the automated document processing in accordance with the present disclosure. These three essential steps are further explained in the description below. Various types of data, comprising the textual information such as typed/printed words and the non-textual information such as graphs and drawings, can be captured from any document.
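The three essential steps above can be sketched as a simple pipeline. This is a minimal illustrative scaffold, assuming each step is a function over a shared document state (the dict-based state and all names are hypothetical, not from the disclosure):

```python
from typing import Callable, List

# Each stage takes and returns a mutable document state (here, a dict).
Stage = Callable[[dict], dict]

def preprocess(doc: dict) -> dict:
    # Step (i): pre-correct the image and build topography information.
    doc["preprocessed"] = True
    return doc

def capture_and_process_fields(doc: dict) -> dict:
    # Step (ii): locate and capture fields using the topography information.
    doc["fields"] = doc.get("fields", [])
    return doc

def process_records(doc: dict) -> dict:
    # Step (iii): validate the captured fields and deliver the record.
    doc["validated"] = True
    return doc

PIPELINE: List[Stage] = [preprocess, capture_and_process_fields, process_records]

def run(doc: dict) -> dict:
    # Apply the three essential steps in order.
    for stage in PIPELINE:
        doc = stage(doc)
    return doc
```

Keeping the stages as separate functions mirrors the modular structure shown in the flow chart of figure 1.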

[0020] According to another embodiment of the present disclosure, an automated document processing method for processing structured, semi-structured and unstructured documents is disclosed. The method comprises the steps of: obtaining at least one image of a scanned document or an electronic document from an input unit, preprocessing the image of the scanned document, capturing one or more fields of a relevant information on the image using the topography information, processing one or more fields captured from the image to retrieve a relevant information comprising a textual information and a non-textual information, validating one or more fields captured from the image and delivering the textual information and the non-textual information corresponding to the one or more fields captured from the image simultaneously.

[0021] The scanned document is obtained as one or more image frames from an input unit. A clear image with minimal error is selected from the one or more image frames for processing; this image is called the 'master image'. The further input image frames are called slave images. Each slave image is aligned and processed by matching the topography information of the master image with the topography information of the slave image. The input unit may be a part of an electronic device such as a scanner or a camera. In some embodiments, the input unit may be an electronic device connected externally to another electronic device. The electronic device may be, but is not limited to, a computer, a laptop, a mobile/portable device or a handheld device.
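One simple way a slave image could be aligned with the master image is by computing a translation between corresponding anchor points in their topography information. The offset-based scheme below is an illustrative sketch under that assumption, not the disclosed algorithm:

```python
from typing import Tuple

Point = Tuple[int, int]  # (x, y) pixel coordinates

def alignment_offset(master_anchor: Point, slave_anchor: Point) -> Point:
    # Translation that maps the slave image's anchor point onto the
    # master image's anchor point.
    dx = master_anchor[0] - slave_anchor[0]
    dy = master_anchor[1] - slave_anchor[1]
    return (dx, dy)

def align_point(point: Point, offset: Point) -> Point:
    # Map a point located on the slave image into master-image coordinates.
    return (point[0] + offset[0], point[1] + offset[1])
```

With the offset known, every field zone located on the master image can be re-used to read the same field on each slave image.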

[0022] Figure 2 illustrates a flow chart for the preprocessing step in accordance with the present disclosure. In the preprocessing step, the image is pre-corrected if an error or defect is detected on the image. Pre-correction is performed to remove noise, errors or any defects on the image. The defect may comprise (i) misalignment of the image and (ii) out-of-sequence pages of the multi-page image. Further, horizontal ruled lines and vertical ruled lines are removed from the image. Pre-correction also repairs characters or lines printed on the image. The pre-correction step further comprises establishing form integrity of the image of the scanned document, which is used for multi-page structured documents. Form integrity assigns page numbers in a multi-page image of the structured document, ensures there is one and only one image of every page, and sorts pages if the pages are out of sequence. Then, the structured document is registered through an anchor point, generating a Primary OCR Output (PRO) file for the image of the scanned document.
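The form-integrity checks described above (one and only one image per page, re-sorting out-of-sequence pages) can be sketched as follows; the list-of-dicts data layout and the function name are hypothetical assumptions:

```python
from typing import Dict, List

def establish_form_integrity(pages: List[Dict]) -> List[Dict]:
    """Verify that every expected page appears exactly once, then sort
    out-of-sequence pages. Each entry carries a detected 'page_no',
    e.g. read from an anchor point on the page image."""
    expected = set(range(1, len(pages) + 1))
    seen = [p["page_no"] for p in pages]
    missing = expected - set(seen)
    duplicates = {n for n in seen if seen.count(n) > 1}
    if missing or duplicates:
        raise ValueError(
            f"form integrity failed: missing={missing}, duplicates={duplicates}")
    # Sort pages back into sequence if they arrived out of order.
    return sorted(pages, key=lambda p: p["page_no"])
```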

[0023] The preprocessing step further comprises the steps of reading a content data along with a plurality of ruled lines from the image of the scanned document, generating a topography information from the content data and the plurality of ruled lines comprising horizontal ruled lines and vertical ruled lines, and determining a template from the topography information. The generation of topography information comprises the generation of a line meta data with a Line Removed Output (LRO) file, the generation of an OCR meta data with a Unique Word Output (UWO) file, and a zone meta data. The line meta data comprises (i) the number of horizontal ruled lines and vertical ruled lines on the image, (ii) the position of horizontal ruled lines and vertical ruled lines on the image, (iii) the length and width of horizontal ruled lines and vertical ruled lines and (iv) the intersections created by the horizontal and vertical lines. The OCR meta data comprises the number of characters on the image, the position of each character on the image, the OCR confidence level of each character and one or more attributes of each character, and the zone meta data comprises an area information occupied by each word or line on the image. The preprocessing step further comprises the steps of resizing the master image to a standard size and creating a cross-index file linking the textual information and the non-textual information (6 elements) in the master image with the slave image. The topography information of the master image is matched with the topography information of the slave image, and so the slave image is aligned and processed.

[0024] The processing of one or more fields comprises automatically generating a plurality of micro zones and macro zones for one or more fields of the image by using the zone meta data, automatically correcting the boundaries and content of the plurality of micro zones and macro zones, and automatically capturing the textual information and the non-textual information simultaneously from the plurality of micro zones and macro zones.
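Building a macro zone from several micro zones can be illustrated by computing the bounding box that encloses the word-level zone metadata; the (left, top, right, bottom) box representation is an assumption:

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (left, top, right, bottom)

def macro_zone(word_zones: List[Box]) -> Box:
    # Merge the zones of several words (micro zones) into one macro zone:
    # the smallest rectangle enclosing all of them.
    lefts, tops, rights, bottoms = zip(*word_zones)
    return (min(lefts), min(tops), max(rights), max(bottoms))
```

A multi-word field such as a postal address would then be captured from the macro zone, while single tokens are read from their micro zones.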

[0025] The contents of a micro zone may have errors due to OCR or ICR. These errors can be automatically corrected and eliminated in both structured documents (SF) and unstructured documents (UF); for example, in a numeric field Ø can be corrected to 0, and in an alpha field 2 can be converted to Z. Some corrections can be done by referring to a master image file. The processing of one or more fields further comprises auto-dropping the contents of a micro zone into a grid, validating contents within a field, and auto-capturing non-textual information. Auto-dropping determines whether the contents of a micro zone are correct, so that the contents can be dropped into the grid without any manual checking. This can be done only if there are no validation errors, and it is primarily based on the confidence level of characters and fields as determined by an OCR engine, coupled with business rules as applicable. Intra-field validations, such as valid date, valid amount or numeric field checks, are done on the stand-alone field. Auto-capturing of non-textual information comprises capturing the layout information, one or more OMR tick boxes and one or more bar codes from the image. Figure 3 illustrates a schematic diagram of the internal modules involved in the image processing and data extraction technique of the scanned document.
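The field-type-driven corrections (Ø to 0 in a numeric field, 2 to Z in an alpha field) and the confidence-based auto-drop decision can be sketched as below. Substitution-table entries beyond the two examples given in the disclosure, and the 0.9 confidence threshold, are illustrative assumptions:

```python
from typing import List

# Characters OCR commonly confuses, corrected according to field type.
# Only the first entry of each table comes from the disclosure's examples;
# the rest are assumed for illustration.
NUMERIC_FIXES = {"\u00d8": "0", "O": "0", "l": "1", "S": "5"}
ALPHA_FIXES = {"2": "Z", "0": "O", "5": "S", "1": "I"}

def correct_field(text: str, field_type: str) -> str:
    # Apply the substitution table matching the declared field type.
    table = NUMERIC_FIXES if field_type == "numeric" else ALPHA_FIXES
    return "".join(table.get(ch, ch) for ch in text)

def can_auto_drop(char_confidences: List[float],
                  threshold: float = 0.9,
                  has_validation_errors: bool = False) -> bool:
    # A field is auto-dropped into the grid only when every character meets
    # the OCR confidence threshold and no validation rule has failed.
    return (not has_validation_errors
            and all(c >= threshold for c in char_confidences))
```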

[0026] Figure 4 illustrates a flow chart for the processing records step in accordance with the present disclosure. The processing records step comprises the steps of validating one or more fields captured from the image and delivering the textual information and the non-textual information corresponding to the one or more fields captured from the image, wherein validating one or more fields comprises: (i) validating the relevant information on a field-to-field basis, (ii) validating the relevant information on a document-to-document basis, (iii) updating the relevant information on the image based on the validation of the relevant information, (iv) reformatting one or more fields on the image, (v) validating one or more reformatted fields, (vi) forming one or more derived fields on the image, and (vii) assigning a flag to each field on the image. Delivering means displaying the record in a desired output format, for example the TEXT, CSV, DBF, XML or EXCEL formats. The record and fields can be displayed on the screen of a computer or an electronic device.
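Intra-field validations such as the valid-date and valid-amount checks, together with the flag assigned to each field, can be sketched as follows (the date format, the rule mapping and the OK/SUSPECT flag values are assumptions made for illustration):

```python
from datetime import datetime
from typing import Callable, Dict

def is_valid_date(value: str, fmt: str = "%d/%m/%Y") -> bool:
    # Intra-field check: the text parses as a date in the expected format.
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

def is_valid_amount(value: str) -> bool:
    # Intra-field check: a monetary amount such as "1,234.50".
    try:
        float(value.replace(",", ""))
        return True
    except ValueError:
        return False

def flag_fields(record: Dict[str, str],
                rules: Dict[str, Callable[[str], bool]]) -> Dict[str, str]:
    # Assign a flag to each field based on its validation rule.
    return {name: ("OK" if rules[name](value) else "SUSPECT")
            for name, value in record.items()}
```

Flagged fields can then be routed to manual review while clean records are delivered directly in the desired output format.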

[0027] The automated document processing in accordance with the present disclosure may be implemented using artificial intelligence engines, fuzzy logic engines, neural networks, rule-based processing, or any other approach that permits automatic data processing. The document processing described in the above embodiments is implementable as a computer program and may be developed as an enterprise software/application. It may comprise a plurality of sub-programs and routines for performing a plurality of tasks. Each sub-program may be assigned to perform a specific task. This enterprise version works at (i) the document level, (ii) the field level and (iii) the record level, and validates all captured data. Intra-document, inter-field and inter-document level validations can be performed.

[0028] In yet another embodiment of the present disclosure, a document processing system for performing the method described above is disclosed. Figure 5 illustrates a hardware architecture of an automated document processing system in accordance with the present disclosure. The automated document processing system comprises an input unit to provide an image of a scanned document; a memory unit; a central processing unit comprising one or more processors configured to perform an automated document processing method; and an output display unit. The automated document processing method comprises the steps mentioned in the above embodiments, namely: obtaining at least one image of the scanned document from the input unit, preprocessing the image of the scanned document, capturing one or more fields of relevant information on the image using the topography information, processing the one or more fields captured from the image to retrieve relevant information comprising textual information and non-textual information, validating the one or more fields captured from the image, and delivering the textual information and the non-textual information corresponding to the one or more fields captured from the image simultaneously on the output display unit.
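The sequence of method steps above can be sketched as a pipeline of stages. Every function body here is a stand-in stub — the actual preprocessing, topography-based capture and recognition algorithms are embodied in the engine, not disclosed in this specification.

```python
def obtain_image(path: str) -> str:
    return f"image:{path}"                     # stand-in for TIFF/PDF/JPEG/EXCEL load

def preprocess(image: str) -> str:
    return image + ":cleaned"                  # stand-in for image preprocessing

def capture_fields(image: str) -> dict:
    return {"invoice_no": "INV-001"}           # stand-in for topography-based capture

def process(fields: dict) -> tuple:
    return fields, {"barcodes": []}            # textual and non-textual information

def validate(textual: dict, non_textual: dict) -> None:
    assert all(v for v in textual.values())    # stand-in for business-rule validation

def deliver(textual: dict, non_textual: dict) -> dict:
    return {"text": textual, "non_text": non_textual}   # stand-in for rendering

def run_pipeline(image_path: str) -> dict:
    """Chain the claimed steps: obtain, preprocess, capture,
    process, validate, deliver."""
    image = preprocess(obtain_image(image_path))
    textual, non_textual = process(capture_fields(image))
    validate(textual, non_textual)
    return deliver(textual, non_textual)
```

Calling `run_pipeline("scan.tiff")` walks the stages in the claimed order and returns the delivered textual and non-textual results.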

[0029] The input unit may be a part of an electronic device, such as a scanner or a camera. In some embodiments, the input unit may be an external device connected to the electronic device. The electronic device may be, but is not limited to, a computer, a laptop, a mobile/portable device or a handheld device. The memory unit may comprise a primary memory unit and a secondary memory unit. The output display unit may be a part of an electronic device, such as an electronic display unit, a touch-screen display, a computer monitor or a similar display device. The central processing unit comprises a control unit, registers, an arithmetic logic unit and a processor memory unit capable of storing files and a plurality of instructions to perform one or more methods or tasks. The one or more processors are configured to process the plurality of programs, sub-programs and routines of the enterprise software or application, and each task can be accomplished by the one or more processors of the central processing unit.

[0030] The document processing system does not require a template to be created for an unstructured document in order to locate and extract the data. With semi-structured and unstructured documents, processing may be quite complex, as the available data and files are imperfect and contain errors. Despite the erroneous data and files, the present document processing system captures error-free relevant information from the scanned document.

[0031] Although the present disclosure has been described in the context of certain aspects and embodiments, it will be understood by those skilled in the art that the present disclosure extends beyond the specific embodiments to alternative embodiments and/or uses of the disclosure and obvious implementations and equivalents thereof. Thus, it is intended that the scope of the present disclosure described herein should not be limited by the aspects and embodiments disclosed above.

Documents

Application Documents

# Name Date
1 201821023251-FORM 1 [21-06-2018(online)].pdf 2018-06-21
2 201821023251-FIGURE OF ABSTRACT [21-06-2018(online)].pdf 2018-06-21
3 201821023251-DRAWINGS [21-06-2018(online)].pdf 2018-06-21
4 201821023251-DECLARATION OF INVENTORSHIP (FORM 5) [21-06-2018(online)].pdf 2018-06-21
5 201821023251-COMPLETE SPECIFICATION [21-06-2018(online)].pdf 2018-06-21
6 Abstract1.jpg 2018-08-11
7 201821023251-FORM-26 [21-09-2018(online)].pdf 2018-09-21
8 201821023251-OTHERS(ORIGINAL UR 6(1A) FORM 26)-260918.pdf 2018-12-19
9 201821023251-Proof of Right (MANDATORY) [14-02-2019(online)].pdf 2019-02-14
10 201821023251-ORIGINAL UR 6(1A) FORM 1-260219.pdf 2019-12-10
11 201821023251-FORM 18 [30-06-2020(online)].pdf 2020-06-30
12 201821023251-RELEVANT DOCUMENTS [07-07-2021(online)].pdf 2021-07-07
13 201821023251-POA [07-07-2021(online)].pdf 2021-07-07
14 201821023251-FORM 13 [07-07-2021(online)].pdf 2021-07-07
15 201821023251-FER.pdf 2021-11-12
16 201821023251-FORM 4(ii) [10-05-2022(online)].pdf 2022-05-10
17 201821023251-FER_SER_REPLY [09-07-2022(online)].pdf 2022-07-09
18 201821023251-DRAWING [09-07-2022(online)].pdf 2022-07-09
19 201821023251-CORRESPONDENCE [09-07-2022(online)].pdf 2022-07-09
20 201821023251-COMPLETE SPECIFICATION [09-07-2022(online)].pdf 2022-07-09
21 201821023251-CLAIMS [09-07-2022(online)].pdf 2022-07-09
22 201821023251-US(14)-HearingNotice-(HearingDate-19-04-2024).pdf 2024-03-27
23 201821023251-FORM-26 [18-04-2024(online)].pdf 2024-04-18
24 201821023251-Correspondence to notify the Controller [18-04-2024(online)].pdf 2024-04-18
25 201821023251-Written submissions and relevant documents [04-05-2024(online)].pdf 2024-05-04
26 201821023251-Annexure [04-05-2024(online)].pdf 2024-05-04
27 201821023251-PETITION UNDER RULE 137 [06-05-2024(online)].pdf 2024-05-06
28 201821023251-PatentCertificate30-05-2024.pdf 2024-05-30
29 201821023251-IntimationOfGrant30-05-2024.pdf 2024-05-30

Search Strategy

1 SearchHistoryE_11-11-2021.pdf

ERegister / Renewals

3rd: 28 Jun 2024

From 21/06/2020 - To 21/06/2021

4th: 28 Jun 2024

From 21/06/2021 - To 21/06/2022

5th: 28 Jun 2024

From 21/06/2022 - To 21/06/2023

6th: 28 Jun 2024

From 21/06/2023 - To 21/06/2024

7th: 28 Jun 2024

From 21/06/2024 - To 21/06/2025

8th: 13 May 2025

From 21/06/2025 - To 21/06/2026