Abstract: A system and method for document image processing are described. The invention verifies the legibility and provides quality assurance of a scanned image of a document. It detects and extracts the critical areas/data zones in the document containing important data required for further processing of the document. Image quality assurance tests are applied on the extracted critical areas/data zones, and based on the verification and the quality of these critical areas, the document is either accepted or rejected. The system and method for detecting and determining the extractability of the critical areas increase the speed with which the document is processed, thereby increasing the productivity and efficiency of the system. The system is further customizable to configure and implement the specific requirements of customers.
Technical field
The present invention relates to system and method for document image processing and more particularly to a system and method for verifying legibility and providing quality assurance of an image of a document.
Background of the invention
The check processing system usually forms the biggest chunk of instruments processed by banks today. Millions of checks are processed daily all over the world. In India, almost all banks still perform physical processing of checks. This traditional processing of bank checks is a tedious process and a drag on the banks' resources. A bank normally follows the following check clearing procedure. Whenever the bank receives a check, it is forwarded to one or more collection centers, with manual processing involved at each stage. If it is the same bank's check, it is sent to the particular branch to which it belongs and then gets cleared; this normally takes two working days. However, if it is a check from a different bank, it is first sent to a centralized bank, from where it is sent to the bank to which it belongs. This is a comparatively lengthier procedure and takes up to four days. Thus, the whole process is highly time consuming for the bank as well as its customers. Also, the transportation and labor costs involved comprise almost 75% of the total processing cost.
With an image processing system, the entire processing is automated. However, the processing is still a major concern for all banks, as it is very difficult to successfully extract all the data from the check. Check truncation is one of the revolutionary changes in banking systems. Check truncation refers to automated processing of scanned copies of bank checks instead of physical checks. In this check truncation environment, only images of the checks are electronically transmitted between financial institutions, and the paper documents are maintained and destroyed by the bank of first deposit as per the mandatory regulations. It allows a financial institution's customers to benefit from new products or services, such as online access to their check images.
The Check 21 Act passed by the USA senate has improved the check and remittance payment systems by allowing banks, financial institutions and remittance payment processors to exchange checks electronically instead of current paper-based exchange. Within past few years, banks, financial institutions and remittance payment processors have shifted away from paper-based financial document processing environment towards image-based processing environment, where images of checks and remittance processing stubs are used to perform data entry, correction and balancing operations. The adoption of such image-based document processing has resulted in improvements in processing throughput and reduced labor costs.
For processing the check, it must be ensured that the image meets a quality standard. The goal of Image Quality Assurance (IQA) is to provide some level of assurance that the check image renditions being created by the sending financial institution/bank of first deposit are of suitable quality and legibility to support follow-on financial document processing operations by the receiving financial institution. The purpose of IQA, therefore, is to identify questionable or suspect document images, which might exhibit one or more image quality defects and viability problems. The image quality defects assessed by the IQA are as follows:
• Images of checks with folded corners
• Folded or torn document images
• Undersized images
• Document framing error
• Images of documents that are excessively skewed
• Images that exhibit poor contrast or excessive brightness
• Images of two documents that are overlapped causing one document to be partially obscured
• Piggyback document
• Horizontal streaks present in the image
• Images whose compressed image file sizes are too small or too large
• Excessive "Spot Noise" in the image
• Front-Rear image dimension mismatch
• Carbon strip detected
• Image out of focus
However, conventional IQA does not fully serve the purpose of image viability. A check consists of many areas that are critical for successfully processing it. These areas include the payee name, date, CAR region (amount in numbers), LAR region (amount in words), signature region and MICR region. These areas must be clearly captured in the check image so that data can be correctly extracted. A very small defect in the critical areas may render the check useless. Other areas of the check are not as critical, and checks can be successfully processed even if image quality defects are present in one or more of these non-critical areas. IQA treats the entire check image homogenously; hence there will be cases where IQA rejects a check that is still usable. For example, a check image can get rejected even if it has noise only in non-critical areas of the check while the data needed for processing the check, i.e. the data in critical areas, is perfectly intact. On the other hand, there might be cases where IQA accepts a check image that is not usable.
For instance, a check image can get accepted even if there is small noise in a critical area such as CAR or LAR that otherwise makes the check unusable. The problem with the IQA system is that it analyzes the image only on the basis of the parameters described above. IQA rejects approximately 10 to 20 per cent of check images, out of which most are passed by visual inspection; therefore, it also rejects some usable checks. Similarly, IQA might fail to detect small but critical regions that are missing or destroyed in check images and pass those checks. When such a check reaches the concerned bank for clearing, it is rejected by visual verification for missing data. Consequently, IQA in its current form does not optimize productivity, as visual verification catches good-quality checks that are not suitable for data extraction, and vice versa.
There are numerous patents that describe processing documents with improved image quality assurance.
US 2005/0243379 describes a document processing system comprising an image capture subsystem for capturing selected image metrics and at least one image rendition from a plurality of documents, and for determining if at least one of the selected image metrics for any of the at least one image rendition does not successfully compare against preselected image quality metric threshold values. An image quality flag is generated for any of the at least one image rendition if it does not successfully compare, and a record entry for each imaged document having at least one flagged image rendition is created in an image quality flag file. An image index file for individually accessing the image renditions is modified to include a reference to the corresponding image quality flag file record entry. The document processing system may optionally compare selected document metrics against preselected document metrics in a similar manner. Image defects in the plurality of documents can be identified by examining the record entries in the image quality flag file.
US 2005/0018896 provides a method for verifying legibility of an image of a check captured in digital image data, the check having two groups of characters on a front side thereof. Each group includes characters which are numeric representations. The characters in one group have a predetermined relationship to the characters in another group. The groups are spaced apart from each other and located in a preselected pattern. The method includes the steps of extracting images of each group from the digital image data, recognizing the images to provide image values of the characters, performing operations on certain image values for one group in accordance with the relationship to provide calculated values, and comparing the image values for another group with the calculated values. If the image and calculated values are not identical, a warning signal is generated.
US 6351553 discloses a method and apparatus used to provide quality assurance for the electronic transfer of document image files, for example, between banks. The documents may be, in the case of a bank, negotiable instruments, checks, deposit slips or other transactional documents. The document image file contains an image tag file and an associated image data file. The image tag file contains first quality assurance data, and the image data file contains second quality assurance data. The quality assurance data may be a MICR line from the document. A first reader extracts the first quality assurance data from the image tag file, and a second reader reads the second quality assurance data from the image data file. A comparator receives the first and second sets of quality assurance data from the two readers and compares them to find correspondence between the data. The level of correspondence provides an indicator of the quality of the image data file and the associated image tag file data.
However, the above-mentioned patent documents do not disclose the technique for providing quality assurance described in the instant invention.
Therefore, there is a need for a system and method that identifies the critical areas of checks containing important data needed for further processing, and rejects or accepts the checks based on a correct assessment of extractability from these critical areas. Such a system will lead to a significant increase in productivity and efficiency. The system should be fully customizable to configure and implement the specific requirements of customers, as the critical data needed for processing a check varies from one country to another and even from one bank to another.
Objects and Summary
The object of the present invention is to provide a system and method for document image processing.
It is an object of the present invention to provide a system and method for verifying legibility and providing quality assurance of an image of a document.
It is another object of the present invention to detect and extract critical data from the image of the document using optical character recognition and intelligent character recognition.
It is yet another object of the present invention to apply image quality assurance tests to the critical data, based on which the document is accepted or rejected.
Further object of the present invention is to increase the speed of processing the document thereby increasing the efficiency and productivity of the system.
To achieve the aforementioned objects, the present invention provides a method for providing quality assurance to an image of a document, said method comprising the steps of:
• detecting, extracting and separating machine-printed and handwritten critical areas/data zones from the image;
• performing quality assessment of the extracted and separated critical areas/data zones using image quality assurance tests; and
• accepting or rejecting the document based on the pre-defined quality assessment techniques.
The present invention further provides a system for providing quality assurance to an image of a document, said system comprising of:
• means for detecting, extracting and separating machine-printed and handwritten critical areas/data zones from the image;
• means for performing quality assessment of the extracted and separated critical areas/data zones using image quality assurance tests; and
• means for accepting or rejecting the document based on the pre-defined quality assessment techniques.
Brief description of drawings
Fig. 1 depicts the system for assessing the document image viability according to the present invention
Fig. 2 depicts the method for assessing the document image viability according to the present invention
Fig. 3a to Fig. 3i illustrate various examples of check documents for implementing the preferred embodiment of the present invention
Fig. 4 illustrates the exemplary method for extracting the CAR region from the check document
Fig. 5 illustrates the exemplary method for extracting the CAR amount from the check document
Fig. 6 illustrates the exemplary method for extracting the LAR region from the check document
Fig. 7 illustrates the exemplary method for extracting the LAR amount from the check document
Fig. 8 illustrates the exemplary method for extracting the date and account number from the check document
Fig. 9 illustrates the exemplary method for extracting the user's signature from the Most Probable Signature Region
Fig. 10 illustrates the exemplary method for detecting the presence of the payee name in the payee name field of the check document
Detailed description
An embodiment of the present invention implementing said system and method for assessing viability of documents is next explained by referring to the accompanying figures.
According to an embodiment of the present invention, physical documents contain both critical and non-critical areas. The present invention detects these areas and, using pre-defined quality assessment techniques, applies image quality assurance tests on these individual areas based on certain guidelines/parameters as defined by IQA. If all the critical areas are qualified as fit for data extraction by these tests, the document is subjected to further processing.
The critical areas vary according to the requirements of various countries and central banks. The presented system is configurable to suit different requirements.
Fig. 1 depicts a system 100 for assessing document image viability according to the present invention. The physical document 102 is subjected to pass through an acquisition unit 104 to obtain scanned image of the document, which is stored on any storage device.
The acquisition unit can be connected to the system 100 by connecting means or coupled to the system 100 through a network. The scanned image is subjected to an image processing system 106 that facilitates extracting or detecting critical areas from the image using an OCR (Optical Character Recognition) engine and an ICR (Intelligent Character Recognition) engine 108. After detection of the critical areas, a subset of the Image Quality Assurance (IQA) system 110 performs quality assessment of the critical areas, and the resulting processed image 112 is sent for further processing and clearing. The critical and non-critical areas may vary from one country to another and from one bank to another.
As the present system is fully customizable, it can be configured to implement the specific requirements of the customers.
Fig. 2 depicts the method 200 for assessing the document image viability according to the present invention.
At block 202, the physical document is input to the scanner or image acquisition unit.
At block 204, a scanned image of the document is obtained, which will be subjected to processing based on the critical areas containing the important data to be analyzed.
At block 206, detection or extraction of critical areas takes place. For example, if it is a check document, the critical areas could be endorsements, date, payee line, signature, courtesy amount (CAR region), legal amount (LAR region), account number, bank name and address, and the MICR region.
After the data zones are extracted, at block 208 the user is provided the option of applying IQA test on the whole scanned image. This step is optional as the present system is sufficient and complete to assess the document viability based on critical areas quality assurance.
At block 210, subset of IQA tests is applied on critical areas or data zones that are extracted in step 206. If the tests for these zones are qualified at block 212, then at block 214, it is checked if all the parameters for verifying the quality of the critical areas or data zones are assessed. If not then at block 216, the next parameter is taken and control is again transferred to block 210 to assess the quality of that parameter. If all the parameters are assessed then at block 218, the document is sent for further processing.
If, on applying the subset of IQA tests on the data zones, a test fails at block 220, the reasons for its failure are checked. The failure can be due to either data missing at block 222 or data not readable at block 226. If the data, i.e. either printed data or handwritten data, is found missing in the document, then the document is rejected at block 224 and is not further processed. But if the data is unreadable in the document image, which can be due to a scanner error, power failure or some other reason, the document is again sent for scanning to obtain a proper image. After rescanning, the whole procedure starts again from block 206. Thus, the document image viability is assessed based on the critical areas, which saves a lot of time in the processing of documents.
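The accept/reject/re-scan decision described in blocks 210 to 226 can be sketched as a simple loop over the extracted zones. The sketch below is illustrative only: the zone names and status labels are assumptions, not part of the invention as claimed.

```python
def assess_document(zones):
    """Return 'accept', 'reject' or 'rescan' for a mapping of
    critical zone name -> assessed status ('ok', 'missing', 'unreadable')."""
    for name, status in zones.items():
        if status == "missing":      # printed/handwritten data absent: reject (block 224)
            return "reject"
        if status == "unreadable":   # e.g. scanner error: request a re-scan (block 226)
            return "rescan"
    return "accept"                  # every critical zone passed its IQA subset (block 218)

print(assess_document({"CAR": "ok", "LAR": "ok", "date": "ok"}))  # accept
print(assess_document({"CAR": "ok", "LAR": "missing"}))           # reject
print(assess_document({"CAR": "unreadable", "date": "ok"}))       # rescan
```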
The present invention will now be explained by means of examples. The document used by the system of the present invention for illustrating a preferred embodiment is a check document.
Fig. 3a shows a check image having the quality defect of a skewed image that is accepted by the system of the present invention. The skewed check image will be rejected by generalized IQA, whose function is to assess the image based on overall quality, but it will be accepted by the system of the present invention because the critical areas or data zones containing the important parameters mentioned above are clearly readable, and hence the check will be passed by the system.
Fig. 3b shows a partial scanned and torn image rejected by the IQA system. This image is accepted by the system of the present invention, as important data areas indicated by arrows are clearly readable.
Fig. 3c shows a check image that is too dark to be accepted by IQA. Since the data zones detected by the system are clearly readable, it will be accepted by the system of the present invention. Similarly, as shown in Fig. 3d, the check image is too light and hence is rejected by IQA; the system extracts the usable data successfully from the image, thereby making it useful.
Fig. 3e shows a check with a noise on the entire scanned image. The image is accepted by the system, as data zones are clearly readable, making the check usable.
Fig. 3f shows a high-quality scanned check image document. This image is accepted by IQA, as the quality of the image is high, but it will be rejected by the system of the present invention, as the critical areas containing important data are not complete because one of the data zones is missing. This will be detected by the system at the initial stage of processing of the check, hence saving a lot of time.
Fig. 3g shows a clearly scanned check image with all the data zones properly and clearly readable except for a small blot in the CAR region. This check will be accepted by IQA and sent for clearance. However, the check will not be accepted by the system of the present invention, as one of the data zones is not clearly readable. The CAR data zone is the most vital zone required in the process of clearance of the check, and hence the check would not be sent for further processing.
Fig. 3h shows a clearly scanned check image with a blot in the date region. It will be accepted by IQA but rejected by the system of the present invention.
Fig. 3i shows a good quality scanned check image in which there is a small torn portion in the signature data zone of the customer. It will be accepted by IQA but rejected by the system of the present invention.
The above-mentioned figures indicate the checks that are marked usable by IQA but unusable by the system of the present invention and vice versa. The detection and extraction of different fields of a check document is now explained by referring to Figs 4 to 10.
A standard bank check consists of two portions to be recognized for the amount. The first location is the CAR amount or the numeric field and the other is the LAR amount or the amount written in alphabetic characters. The portion of the check that consists of the courtesy amount has 'Rs.'(Indian currency symbol) or '$'(US currency symbol) or any other currency symbol printed on it followed by the handwritten check amount. The banks always face a problem with the recognition of the Courtesy amount, commonly referred to as the CAR amount, on the checks or other financial documents. Efficient check processing has always been of paramount concern to the banks to increase the efficiency of their check system.
Preprocessing is an important step for automatic check processing in a banking scenario where there is huge variation in writing style, especially the way in which the courtesy amount is terminated and the fractional amount is written. Courtesy Amount Recognition (CAR) and Legal Amount Recognition (LAR) form the core of an automated check processing system. However, other areas such as the payee name, signature region, date etc. play an equally vital role in automated check processing.
Fig. 4 illustrates the method of detecting CAR region (courtesy-amount region) from the check document.
The check document is scanned at 400 to obtain a binarized check image; the method then pre-processes this region and segments the courtesy amount into individual characters before feeding it to an ICR engine. For detecting the courtesy-amount region, a Most Probable Region (MPR) is detected at 402 based on configurable rules and semantic analysis. The binarized check image is pre-processed for noise removal in the CAR region at 404. Physical smearing is then performed at 406 by joining the big gaps in the image. Thereafter, line detection and line filtering are done at 408 and 410 respectively to pick up lines exceeding a given threshold. Line filtering removes the lines that are very small in size and retains only big lines (horizontal and vertical) that are nearly the same size as the CAR box.
After the line filtering, logical smearing on the detected lines is performed at 412 to join big gaps between these detected lines. Then the ratio of horizontal and vertical lines is compared with a threshold value at 414. The threshold value is 2 in the case of those check documents that have a CAR box present in them; the line count calculation is not done for check documents that have no CAR box. If the ratio is more than the threshold value, node identification is done at 416 to form the CAR box with the help of the detected horizontal and vertical lines. The nodes refer to the four coordinates of the CAR box. The CAR box is finally identified at 418 and is extracted with the help of the four coordinates at 420. If the ratio is less than the threshold value, then the noise-removed most probable region at 422 is subjected to a further two-level noise removal at 424 to obtain a cleaner image. At 426, the text in the CAR region is identified by the data extraction engine, which can be an intelligent character recognition engine or an optical character recognition engine. If the data extraction fails at 428, the image is again checked at 430 and the steps are repeated from step 422. However, if the data is extracted successfully at 426, the currency symbol is detected using the optical character recognition engine at 432. If the currency symbol is not found at 434, the image is again subjected to checking at 430 and the steps are repeated from 422. If the currency symbol is detected at 434, then it is extracted with the help of the four coordinates that were used for extracting the CAR box at 420. When the image is checked at 430 and the currency symbol is still not detected at 438 and the CAR box is not present at 440, the check is sent for manual verification. The number and length of the lines, together with the CAR region, aid in segregating the LAR region from the check successfully.
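The physical and logical smearing steps at 406 and 412 resemble run-length smoothing: short runs of background pixels lying between ink pixels are filled so that broken strokes and box lines join up. A minimal single-row sketch, in which the gap threshold is an illustrative assumption:

```python
def smear_row(row, gap):
    """Run-length smearing on one binary image row: fill runs of 0s
    no longer than `gap` that are bounded by ink (1s) on both sides."""
    out = row[:]
    i, n = 0, len(row)
    while i < n:
        if row[i] == 0:
            j = i
            while j < n and row[j] == 0:
                j += 1
            # fill only interior gaps that are short enough
            if 0 < i and j < n and (j - i) <= gap:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out

print(smear_row([1, 0, 0, 1, 0, 0, 0, 0, 1], 3))  # [1, 1, 1, 1, 0, 0, 0, 0, 1]
```

The short gap between the first two ink runs is filled, while the four-pixel gap (longer than the threshold) is left open.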
Fig. 5 illustrates the exemplary method for extracting CAR amount from the check document.
The CAR box extracted at 502 is subjected to salt-noise removal at 504 to remove small dots from the scanned image. The method intelligently removes currency symbols at 506, terminal characters at 508 and delimiters at 510 with a high degree of accuracy. The remaining data in the extracted region is the CAR amount and the sub-CAR amount on the check at 512. The extracted region is then further processed by cleaning and validation based on analysis of previous values at 514, and the result is the final CAR amount at 516.
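The removal of currency symbols, terminal characters and delimiters at 506 to 510 can be illustrated on the recognized text itself. This sketch assumes the ICR result is already a string; the patterns shown (e.g. "Rs.", "/-") are illustrative examples, not an exhaustive rule set.

```python
import re

def clean_car_string(raw):
    """Strip a leading currency symbol, trailing terminal marks such as '/-',
    and comma delimiters from a recognized courtesy-amount string."""
    s = raw.strip()
    s = re.sub(r'^(Rs\.?|\$)\s*', '', s)  # currency symbol (step 506)
    s = re.sub(r'[/\-=*]+$', '', s)       # terminal characters (step 508)
    s = s.replace(',', '')                # delimiters (step 510)
    return s.strip()

print(clean_car_string("Rs. 2,420/-"))  # 2420
print(clean_car_string("$120"))         # 120
```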
The amount on the check is written in words in addition to the amount in numerals. The amount written in words is known as the legal amount, and its recognition as Legal Amount Recognition or LAR. LAR amount processing is carried out after the CAR amount recognition to determine the final amount on the check. Fig. 6 illustrates the exemplary method for extracting the LAR region from the check document. The binarized check image at 602 is pre-processed for noise removal in the LAR region at 604. Physical smearing is then performed at 606 by joining the big gaps in the image. Thereafter, line detection is done at 608 to pick up lines exceeding a given threshold. After the line detection, logical smearing on the detected lines is performed at 610 to join big gaps between these detected lines. At 612, it is checked whether the detected line's origin is left of the check center, the detected line's end is right of the check center, and the line's length is greater than half the width of the check image. If the length is greater than half the width of the check image, line filtering is done at 614 to remove the lines that are very small in size and retain only big lines (horizontal and vertical) that are nearly the same size as the LAR box. Thereafter, at 616 the remaining lines are counted. However, if the calculation at step 612 yields less than half the width of the check image, step 614 is not performed and control is transferred directly to step 616. At 618, it is checked whether the line count is greater than or equal to three, or the position of the first line with respect to the CAR box top is less than the threshold value. If neither condition is satisfied at 618, the LAR region is extracted with respect to the CAR box coordinates by further processing and pre-calculated values. If either or both of the conditions at 618 are satisfied, the LAR region is extracted directly.
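The geometric test at block 612 — line starts left of the check center, ends right of it, and spans more than half the image width — can be written directly. The coordinates used below are illustrative pixel positions:

```python
def spans_check_center(x_start, x_end, check_width):
    """Block 612: True when a detected line crosses the check center
    and is longer than half the check image width."""
    center = check_width / 2
    return x_start < center and x_end > center and (x_end - x_start) > center

print(spans_check_center(100, 1000, 1600))  # True: long line across the center
print(spans_check_center(700, 900, 1600))   # False: crosses center but too short
```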
After the LAR region is extracted, Fig. 7 illustrates the exemplary method for extracting LAR amount from the check document.
The LAR box extracted at 702 is subjected to noise removal at 704 to further clean the image. At 706, multiple-to-single line conversion is done since, on a bank check, two lines are printed in the LAR region for writing the amount in words; the amount written in two lines is converted into one single line. This helps in improving the data accuracy of the identified text. The printed currency symbol is removed at 708. At 710, word segmentation is done to separate the whole amount into single words. For example, "two thousand four hundred twenty" is segmented into two, thousand, four, hundred, twenty for accurate identification of the words. At 712, if the words identified at 710 are OCRable, then at 716 optical character recognition is done and the LAR amount is extracted at 718. If the words are not OCRable, then at 714 sequence matching is done with the help of LAR processing elements, known as LP elements, developed through experimentation and analysis. These elements help in the identification of handwritten text, and finally the LAR amount is extracted at 718.
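The sequence matching at 714 against "LP elements" is not specified in detail; one plausible stand-in is fuzzy matching of each segmented token against a legal-amount lexicon, sketched below with Python's difflib. The lexicon and the similarity cutoff are assumptions made for illustration only.

```python
import difflib

# Illustrative legal-amount vocabulary; a real lexicon would be bank-specific.
LAR_LEXICON = ["one", "two", "three", "four", "five", "six", "seven", "eight",
               "nine", "ten", "twenty", "thirty", "forty", "fifty", "hundred",
               "thousand", "lakh", "only"]

def match_words(tokens):
    """Map each segmented word to its closest lexicon entry, if any."""
    matched = []
    for word in tokens:
        hits = difflib.get_close_matches(word.lower(), LAR_LEXICON, n=1, cutoff=0.6)
        matched.append(hits[0] if hits else word)
    return matched

print(match_words(["Twoo", "thousend", "hundered", "twenty"]))
```

Mis-recognized tokens such as "thousend" are pulled to their nearest legal-amount word, which is the role the patent assigns to its LP elements for handwritten text.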
Fig. 8 illustrates the exemplary method for extracting the date and account number from the check document.
A Most Probable Data Region (MPDR) is detected at 802 with the help of analysis and pre-determined algorithms. The region is then searched for date and account number strings after pre-processing steps such as filtering and line detection. Component filtering is done at 804 so that data can be read by identifying components in the scanned image. These components are analyzed on the basis of their size, i.e. the number of pixels making up the image, and their position with respect to the printed "date". Component filtering helps in segregating the useful components from noise. At 806, line detection is performed to detect the presence of lines in the date region, and if a line is found at 808, the position of the printed date is estimated using the intelligent character recognition engine at 810. The date is extracted at 812 and finally the noise and lines are removed at 814. If no line is found at 808, control is transferred to step 814 to remove the noise. At 816, the account number string is detected using optical character recognition. At 818, the option of validating the account number, date string and noise is displayed. Validation is done to determine whether the read data is an account number, a date or just noise. This is performed by comparing the value of the string to another string to differentiate between date, account number and noise. If the string to be validated is an account number, the account number is detected at 820, validated at 822, and the noise is removed at 824. The final account number string is extracted at 826. If the string to be validated is a date string, the noise is removed at 828, the string is validated at 830 and finally it is extracted at 832. If there is no noise in the check document, control is transferred directly to 830.
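The validation at 818 to 830, which decides whether a recognized string is a date, an account number or noise, can be approximated with simple pattern tests. The formats below are illustrative assumptions; real date and account-number formats vary by bank and country.

```python
import re

def classify_string(s):
    """Classify a recognized string as 'date', 'account' or 'noise'."""
    if re.fullmatch(r'\d{1,2}[/-]\d{1,2}[/-]\d{2,4}', s):
        return "date"
    if re.fullmatch(r'\d{9,18}', s):   # plausible account-number length range
        return "account"
    return "noise"

print(classify_string("12/03/2024"))    # date
print(classify_string("001234567890"))  # account
print(classify_string("~~##"))          # noise
```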
Fig. 9 illustrates the exemplary method for extracting the user's signature from the Most Probable Signature Region. The Most Probable Signature Region (MPSR) is detected at 902 and analyzed for the presence of a signature. The area is extracted with the help of the CAR region already detected in the previous step. The authorized stamp at the point of signature is detected and removed at 904. The noise is removed at 906. At 908, component analysis and grouping is performed, which means that the MPSR is scanned to identify components that can be either text or noise. In order to differentiate between these components, all the components are combined into small groups. These groups are compared with a threshold value at 910, which can be five by way of example. If any group has more than five characters, those groups are removed at 912 and the remaining groups are retained to form a complete signature. At 914, the image is smeared by some factor or threshold value to join broken components. The components are analyzed according to density, aspect ratio and size. The component is finally extracted at 916 to obtain a signature image at 918.
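The grouping test at blocks 910 and 912 — discard component groups with more than five members, since dense groups are more likely printed text than signature strokes — can be sketched as a filter. The list-of-lists group representation is an assumption; the threshold of five follows the example in the text.

```python
def filter_signature_groups(groups, max_members=5):
    """Blocks 910-912: keep only component groups small enough to be
    signature strokes; larger groups are treated as printed text."""
    return [g for g in groups if len(g) <= max_members]

strokes = [["c1", "c2"], ["c3"], ["t1", "t2", "t3", "t4", "t5", "t6", "t7"]]
print(filter_signature_groups(strokes))  # the 7-member group is dropped
```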
Fig. 10 illustrates the exemplary method for detecting the presence of the payee name in the payee name field of the check document. The Most Probable Payee Name region (MPPR) is extracted with the help of the previously estimated LAR region at 1002. The noise is removed at 1004. The density of the extracted region is compared with a threshold value at 1006. If the density is less than the threshold value, the payee name is not detected by the data extraction engine at 1008. If the density is more than the threshold value, the payee name is detected by the data extraction engine at 1010.
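The density comparison at 1006 can be illustrated on a binary image represented as rows of 0/1 pixels: the fraction of ink pixels in the Most Probable Payee Name region is compared against a threshold. The threshold value used here is an illustrative assumption.

```python
def payee_present(region, threshold=0.02):
    """Block 1006: True when the ink density of the payee region exceeds
    the threshold, suggesting a payee name is written there."""
    total = sum(len(row) for row in region)
    ink = sum(sum(row) for row in region)
    return total > 0 and ink / total > threshold

print(payee_present([[0, 1, 1, 0], [0, 1, 0, 0]]))  # True: 3/8 ink pixels
print(payee_present([[0, 0, 0, 0], [0, 0, 0, 0]]))  # False: blank region
```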
The advantages provided by the system and methods of the present invention include saving a lot of time in the clearing process on the bank's end. The bank can quickly determine whether it wants to make a return decision or a re-scan request. This results in increased efficiency and productivity of the employees.
The present invention is not intended to be restricted to any particular form or arrangement, or any specific embodiment, or any specific use, disclosed herein, since the same may be modified in various particulars or relations without departing from the spirit or scope of the claimed invention herein below shown and described of which the system or method shown is intended only for illustration and disclosure of an operative embodiment and not to show all of the various forms or modifications in which this invention might be embodied or operated.
We Claim:
1. A method for providing quality assurance to a scanned image of a
document, said method comprising the steps of:
• detecting, extracting and separating machine-printed and handwritten critical areas/data zones from the scanned image;
• performing quality assessment of the extracted and separated critical areas/data zones using image quality assurance tests; and
• accepting or rejecting the document based on the pre-defined quality assessment techniques.
2. The method as claimed in claim 1, wherein the machine printed critical areas/data zones are detected and extracted using optical character recognition.
3. The method as claimed in claim 1, wherein the handwritten critical areas/data zones are detected and extracted using intelligent character recognition.
4. The method as claimed in claim 1, wherein the quality assessment is performed for the critical areas/data zones iteratively.
5. The method as claimed in claim 1, wherein the document is accepted when all the critical areas/data zones are verified by the quality assessment techniques.
6. The method as claimed in claim 5, wherein the document is accepted if the image does not embed horizontal or vertical streaks, excessive "spot noise" or is not skewed, folded, torn, too dark, too light, undersized, oversized or a combination thereof in the critical areas/data zones.
7. The method as claimed in claim 1, wherein the document is rejected if one or more of the critical areas/data zones are corrupted or missing in the image of the document.
8. The method as claimed in claim 1, wherein the document is a financial document or a non-financial document.
9. The method as claimed in claim 1, wherein the image quality assurance tests are pre-defined or user defined.
10. A system for providing quality assurance to a scanned image of a document, said system comprising of:
• means for detecting, extracting and separating machine-printed and handwritten critical areas/data zones from the scanned image;
• means for performing quality assessment of the extracted and separated critical areas/data zones using image quality assurance tests; and
• means for accepting or rejecting the document based on the predefined quality assessment techniques.
11. The system as claimed in claim 10, wherein the means for detecting and extracting machine-printed critical areas/data zones are optical character recognition.
12. The system as claimed in claim 10, wherein the means for detecting and extracting handwritten critical areas/data zones are intelligent character recognition.
13. The system as claimed in claim 10, wherein the means for performing the quality assessment is configured to be performed for the critical areas/data zones iteratively.
14. The system as claimed in claim 10, wherein said means accepts the document when all the critical areas/data zones are verified by the quality assessment techniques.
15. The system as claimed in claim 14, wherein said means accepts the document if the image does not embed horizontal or vertical streaks, excessive "spot noise" or is not skewed, folded, torn, too dark, too light, undersized, oversized or a combination thereof in the critical areas/data zones.
16. The system as claimed in claim 10, wherein said means rejects the document if one or more of the critical areas/data zones are corrupted or missing in the image of the document.
17. The system as claimed in claim 10, wherein the document is a financial document or a non-financial document.
18. The system as claimed in claim 10, wherein the image quality assurance tests are pre-defined or user defined.
19. A computer program product for providing quality assurance to a scanned
image of a document, comprising one or more computer readable media configured to perform said method.
| # | Name | Date |
|---|---|---|
| 1 | 1430-CHE-2008-RELEVANT DOCUMENTS [27-12-2023(online)].pdf | 2023-12-27 |
| 2 | 1430-CHE-2008-RELEVANT DOCUMENTS [05-02-2021(online)].pdf | 2021-02-05 |
| 3 | 1430-CHE-2008-FORM 13 [24-09-2020(online)].pdf | 2020-09-24 |
| 4 | 1430-CHE-2008-FORM-15 [10-09-2020(online)].pdf | 2020-09-10 |
| 5 | Abstract_Granted 280546_22-02-2017.pdf | 2017-02-22 |
| 6 | Claims_Granted 280546_22-02-2017.pdf | 2017-02-22 |
| 7 | Description_Granted 280546_22-02-2017.pdf | 2017-02-22 |
| 8 | Drawings_Granted 280546_22-02-2017.pdf | 2017-02-22 |
| 9 | Other Patent Document [31-01-2017(online)].pdf | 2017-01-31 |
| 10 | HEARING ADJOURNMENT [16-12-2016(online)].pdf | 2016-12-16 |
| 11 | 1430-CHE-2008_EXAMREPORT.pdf | 2016-07-02 |
| 12 | 1430-CHE-2008 EXAMINATION REPORT REPLY RECEIVED 23-03-2015.pdf | 2015-03-23 |
| 13 | abstract.pdf | 2015-03-13 |
| 14 | claims.pdf | 2015-03-13 |
| 15 | others.pdf | 2015-03-13 |
| 16 | Reply to FER.pdf | 2015-03-13 |
| 17 | spec..pdf | 2015-03-13 |
| 18 | spec..pdf ONLINE | 2015-03-09 |
| 19 | abstract.pdf ONLINE | 2015-03-09 |
| 20 | Reply to FER.pdf ONLINE | 2015-03-09 |
| 21 | claims.pdf ONLINE | 2015-03-09 |
| 22 | others.pdf ONLINE | 2015-03-09 |
| 23 | 1430-CHE-2008 FORM-3.pdf | 2012-02-14 |
| 24 | 1430-CHE-2008 FORM-1.pdf | 2012-02-14 |
| 25 | 1430-CHE-2008 DRAWINGS.pdf | 2012-02-14 |
| 26 | 1430-CHE-2008 DESCRIPTION (PROVISIONAL).pdf | 2012-02-14 |
| 27 | 1430-CHE-2008 CORREPONDENCE OTHERS.pdf | 2012-02-14 |
| 28 | 1430-che-2008power of attorney 09-06-2009.pdf | 2009-06-09 |
| 29 | 1430-che-2008 form-9 09-06-2009.pdf | 2009-06-09 |
| 30 | 1430-che-2008 form-5 09-06-2009.pdf | 2009-06-09 |
| 31 | 1430-che-2008 form-3 09-06-2009.pdf | 2009-06-09 |
| 32 | 1430-che-2008 form-2 09-06-2009.pdf | 2009-06-09 |
| 33 | 1430-che-2008 form-1 09-06-2009.pdf | 2009-06-09 |
| 34 | 1430-CHE-2008 DRAWINGS 09-06-2009.pdf | 2009-06-09 |
| 35 | 1430-CHE-2008 DESCRIPTION (COMPLETE) 09-06-2009.pdf | 2009-06-09 |
| 36 | 1430-che-2008 correspondence others 09-06-2009.pdf | 2009-06-09 |
| 37 | 1430-che-2008 claims 09-06-2009.pdf | 2009-06-09 |
| 38 | 1430-CHE-2008 ABSTRACT 09-06-2009.pdf | 2009-06-09 |