Abstract: A system and a method for identification of alphanumeric characters present in a series in an image are disclosed. The system and method capture the image and process it for binarization by computing an intensity pattern of the image. The generated binarized images are then filtered to remove unwanted components. Candidate images are identified out of the filtered binarized images. The obtained candidate images are combined to generate a final candidate image, which is further segmented in order to recognize a valid alphanumeric character present in the series.
FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
THE PATENT RULES 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
A SYSTEM AND METHOD FOR PROCESSING IMAGE FOR IDENTIFYING ALPHANUMERIC CHARACTERS PRESENT IN A SERIES
Applicant
TATA Consultancy Services Limited
A company Incorporated in India under The Companies Act 1956
Having address:
Nirmal Building 9th Floor
Nariman Point Mumbai 400021
Maharashtra India
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
The present invention in general relates to a method and system for character identification. More particularly, the invention relates to a method and system for identifying alphanumeric characters present in a series in an image.
BACKGROUND OF THE INVENTION
Images of the Vehicle Identification Number (VIN) are often captured with mobile phone cameras by ordinary users, for some specific purpose, in extraordinary situations. Manual involvement in the capturing process, uneven and insufficient illumination, and the unavailability of a sophisticated focusing system yield poor-quality images.
The performance of available open-source Optical Character Recognition (OCR) systems on VIN images captured by mobile phones is extremely poor because the image quality is affected by various noises. Therefore, image enhancement techniques need to be applied before giving a scanned image as input to an OCR system. Binarization is used as an image enhancement technique to separate the text region from a complex background, more specifically from the background text.
OCR for text in mobile-camera-captured images suffers from a variety of shortcomings. In existing systems, individual characters must be extracted on an embedded mobile platform, which has low memory and processing speed. Many binarization techniques have been proposed to improve the recognition accuracy of such images, but the existing binarization techniques can improve the recognition accuracy of the images only by up to 5.89% at most.
Therefore, there is a need for a system and method providing a suitable low-complexity binarization technique that improves the recognition accuracy of an image to a greater extent.
OBJECTS OF THE INVENTION
It is the primary object of the invention to provide a system and method for identification of alphanumeric characters present in a series in an image.
It is another object of the invention to provide a system and method for performing binarization of the image thus captured.
It is yet another object of the invention to provide a system and method for removing unwanted over-segmented and under-segmented segments from the binarized images.
It is yet another object of the invention to provide a system and method for applying morphological closing for merging the multiple component labels in the valid alphanumeric characters.
SUMMARY OF THE INVENTION
The present invention provides a method for identification of alphanumeric characters present in a series in an image. The method comprises the processor-implemented steps of capturing the image comprising the series of alphanumeric characters and processing the image to produce a set of identifiable characters out of the series of alphanumeric characters. The processing further comprises computing a pattern for recognizing a pixel intensity distribution in the image to determine a background peak and a foreground peak, generating a plurality of binarized images by selecting a plurality of dynamic threshold values between the background peak and the foreground peak, and filtering the generated binarized images by removing unwanted components from the plurality of images to identify one or more valid characters. The processing further comprises identifying one or more candidate images by comparing the valid characters with respect to a known ground truth value, generating a final candidate image by combining the candidate images such that the combination of the candidate images is dependent upon a predefined condition, splitting the final candidate image into predefined segments, and recognizing a valid alphanumeric character associated with each segment therein.
The present invention also provides a system for identification of alphanumeric characters present in a series in an image. The system comprises an image capturing device for capturing the image comprising the alphanumeric characters present in the series and a processor configured to produce a set of identifiable characters out of the series of alphanumeric characters. The processor further comprises a computing module configured to compute a pattern for recognizing a pixel intensity distribution in the image to determine a background peak and a foreground peak, a binarization module configured to generate a plurality of binarized images by selecting a plurality of dynamic threshold values between the background peak and the foreground peak, and a filter configured to remove unwanted components from the plurality of images to identify one or more valid characters. The processor further comprises a comparator configured to compare the valid characters with respect to a known ground truth value in order to identify one or more candidate images and an image generator configured to generate a final candidate image by combining the candidate images such that the combination of the candidate images is dependent upon a predefined condition. The system further comprises an output generating module configured to split the final candidate image into predefined segments and to recognize a valid alphanumeric character associated with each segment therein.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates the system architecture in accordance with an embodiment of the invention.
Figure 2 illustrates an exemplary flowchart in accordance with an alternate embodiment of the invention.
Figure 3 illustrates the form of image after applying morphological closing in accordance with an alternate embodiment of the system.
Figure 4 illustrates comparative analyses of the binarization technique of the present invention with those of the prior art in accordance with an embodiment of the invention.
DETAILED DESCRIPTION
Some embodiments of this invention illustrating its features will now be discussed:
The words “comprising”, “having”, “containing”, and “including”, and other forms thereof, are intended to be equivalent in meaning and open-ended, in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
It must also be noted that, as used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Although any systems, methods, apparatuses, and devices similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, the preferred systems and parts are now described. In the following description, for the purpose of explanation and understanding, reference is made to numerous embodiments, with no intent to limit the scope of the invention.
One or more components of the invention are described as modules for the understanding of the specification. For example, a module may include a self-contained component in a hardware circuit comprising logic gates, semiconductor devices, integrated circuits, or any other discrete components. A module may also be part of any software program executed by any hardware entity, for example a processor. The implementation of a module as a software program may include a set of logical instructions to be executed by the processor or any other hardware entity. Further, a module may be incorporated with the set of instructions or a program by means of an interface.
The disclosed embodiments are merely exemplary of the invention which may be embodied in various forms.
The present invention relates to a system and a method for identification of alphanumeric characters present in a series in an image. In the very first step, two major peaks are identified from a pattern of the gray scale image and a number of binarized images are obtained. The unwanted components are removed from the binarized images. Further, one or more candidate images are segmented, such that each segment contains a valid character, in order to generate a final candidate image.
In accordance with an embodiment, referring to figure 1, the system (100) comprises an image capturing device (102) adapted to capture the image comprising the alphanumeric characters present in the series (as shown in step 202 of figure 2). The system further comprises a processor (104) configured to produce a set of identifiable characters out of the series of alphanumeric characters (as shown in step 206 of figure 2). The processor further comprises a computing module (106), a binarization module (108), a filter (110), a comparator (112), and an image generator (114).
In accordance with an embodiment, still referring to figure 1, the image capturing device captures the image in gray scale. The image capturing device may include a camera, which may be coupled to some other electronic device; by way of specific example, the camera may be present in a mobile phone. The images are captured by the image capturing device (102) in a plurality of frames. These images comprise a series of alphanumeric characters to be identified and hence may include one or more types of noise. The captured images are further processed by the processor (104), which produces a set of identifiable characters out of the series of alphanumeric characters present in the image.
By way of a specific example, the number of alphanumeric characters present in the series may include, but is not limited to, 17 alphanumeric characters.
The processor (104) further comprises a computing module (106) configured to compute a pattern for recognizing a pixel intensity distribution in the image, in order to determine a background peak and a foreground peak. The pixel intensity distribution is recognized in the form of a histogram.
The computing module (106) enhances the quality of the input image by applying the retinex strategy (as shown in step 204 of figure 2). The image enhancement is based on the observation that there are two sources of noise; one is multiplicative in nature and appears due to the background text and the reflection from the glass. The computing module (106) further converts the image into a gray scale image, in which the only colors are shades of gray. An intensity histogram of the gray scale image is then computed, i.e. a graph showing the number of pixels in the image at each intensity value found in that image (as shown in step 208 of figure 2). By way of specific example, an 8-bit grayscale image has 256 possible intensities, so the histogram graphically displays 256 numbers showing the distribution of pixels among those grayscale values. From this intensity distribution, two major peaks are identified, one located near the value 0 and the other located near the value 255 (as shown in step 210 of figure 2). These peaks represent the background part and the foreground part of the image respectively.
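The histogram and peak-finding step can be sketched as follows (a minimal illustration on a synthetic 8-bit gray scale raster; the split point of 128 between the two search ranges is an assumption of this sketch, as the text only says one peak lies near 0 and the other near 255):

```python
def intensity_histogram(image):
    """Count pixels at each of the 256 possible 8-bit intensities."""
    hist = [0] * 256
    for row in image:
        for pixel in row:
            hist[pixel] += 1
    return hist

def background_foreground_peaks(hist):
    """Pick the dominant intensity near 0 (dark) and near 255 (bright).

    The split at 128 is an assumption of this sketch, not from the text.
    """
    dark_peak = max(range(0, 128), key=lambda v: hist[v])
    bright_peak = max(range(128, 256), key=lambda v: hist[v])
    return dark_peak, bright_peak

# Tiny synthetic image: mostly bright background with dark foreground text.
image = [[240, 240, 20, 240],
         [240, 30, 20, 240],
         [240, 240, 240, 240]]
hist = intensity_histogram(image)
print(background_foreground_peaks(hist))  # -> (20, 240)
```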
The processor (104) further comprises the binarization module (108), which is configured to generate a plurality of binarized images.
In accordance with an embodiment, the disclosed binarization method is based on two main observations: there is a slight gray scale variation between the background text (BGT) and the text of interest (TOI), and strictly 17 alphanumeric characters are present in the captured image. A specific number (n) of dynamic threshold values (pixel values) between the background peak and the foreground peak is used for binarization (as shown in step 212 of figure 2). For an image in 8-bits-per-pixel format this number is 16, obtained heuristically. Thus n binarized images are obtained from the single gray scale image (as shown in step 214 of figure 2).
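A minimal sketch of the dynamic-threshold binarization, assuming the n threshold values are equally spaced between the two peaks (the text fixes n = 16 for 8-bit images; a smaller n is used here only for brevity, and equal spacing is an assumption of this sketch):

```python
def dynamic_thresholds(background_peak, foreground_peak, n):
    """n equally spaced pixel values strictly between the two peaks."""
    lo, hi = sorted((background_peak, foreground_peak))
    step = (hi - lo) / (n + 1)
    return [round(lo + step * (i + 1)) for i in range(n)]

def binarize(image, threshold):
    """1 = foreground (dark text), 0 = background, for each pixel."""
    return [[1 if p < threshold else 0 for p in row] for row in image]

image = [[240, 20, 240],
         [30, 240, 240]]
# One binarized image per threshold, as in step 214 of figure 2.
binarized_images = [binarize(image, t) for t in dynamic_thresholds(20, 240, 4)]
print(dynamic_thresholds(20, 240, 4))  # -> [64, 108, 152, 196]
```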
In accordance with an embodiment, the foreground pixels of each such image are labeled using the connected-component labeling method. Connected-component labeling is an algorithmic application of graph theory in which subsets of connected components are uniquely labeled based on a given heuristic. A graph containing vertices and connecting edges is constructed from the input image. The vertices contain information required by the comparison heuristic, while the edges indicate connected "neighbors". An algorithm traverses the graph, labeling the vertices based on the connectivity and relative values of their neighbors. Following the labeling stage, the graph may be partitioned into subsets, after which the original information can be recovered and processed.
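The labeling step described above can be illustrated with a simple breadth-first connected-component labeler (this sketch assumes 4-connectivity, which the text does not specify):

```python
from collections import deque

def label_components(binary):
    """Assign a distinct positive label to each 4-connected foreground region."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and labels[y][x] == 0:
                current += 1  # start a new component
                labels[y][x] = current
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    # Visit the four axis-aligned neighbors.
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

binary = [[1, 1, 0, 1],
          [0, 0, 0, 1]]
labels, count = label_components(binary)
print(count)  # -> 2
```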
The processor (104) further comprises a filter (110) configured to remove unwanted components from the n binarized images in order to identify one or more valid characters (as shown in step 216 of figure 2). Components that are too small or too big are removed. A component is defined to be too small if the number of pixels with that particular label is less than 100, or if the component has a height (h) or width (w) of less than 3 pixels. Similarly, a component is defined to be too big if the number of pixels with that particular label is more than width/4, or if
h > (ht_image/3) or
w > (wd_image/4),
where ht_image is the height of the image and wd_image is the width of the image.
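Taken literally, the size filter above can be sketched as follows (each component is summarized by its pixel count, height h, and width w; reading the "more than width/4" bound as wd_image/4 is an interpretation of this sketch, not stated explicitly in the text):

```python
def is_valid_component(pixel_count, h, w, ht_image, wd_image):
    """Keep only components that are neither too small nor too big,
    using the bounds stated in the text (taken literally here)."""
    too_small = pixel_count < 100 or h < 3 or w < 3
    too_big = (pixel_count > wd_image / 4   # interpretation of "width/4"
               or h > ht_image / 3
               or w > wd_image / 4)
    return not (too_small or too_big)

# A plausible character-sized component in a 480x640 image is kept:
print(is_valid_component(150, 40, 20, 480, 640))  # -> True
# A tiny speck of noise is rejected:
print(is_valid_component(5, 2, 2, 480, 640))  # -> False
```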
The processor (104) further comprises a comparator (112) configured to compare the valid characters with respect to a known ground truth value in order to identify one or more candidate images. The known ground truth value (k) is equal to the number of alphanumeric characters present in the series.
The comparator (112) is used to remove the unwanted components in order to identify the candidate images. If the number of components is less than k/2, the actual k characters are either heavily under-segmented or the binarized image does not include all valid characters as foreground (as shown in step 218 of figure 2), so this binarized image is not considered a candidate image. Similarly, if the number of components is greater than k*3, then on average one valid character is over-segmented into more than 3 segments (as shown in step 218 of figure 2). The over-segmented and under-segmented binarized images are disregarded, and the remaining binarized images are considered the candidate images. Thus only a few valid images are left out of the n binarized images. Typically the number of such candidate images for each input image is less than or equal to 3 (for the case where the number of alphanumeric characters present in the series is 17).
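The under/over-segmentation test above reduces to a range check on the component count, sketched here:

```python
def is_candidate(num_components, k):
    """Keep a binarized image only if its component count lies in [k/2, 3k],
    rejecting under-segmented (< k/2) and over-segmented (> 3k) images."""
    return k / 2 <= num_components <= k * 3

k = 17  # e.g. a 17-character VIN
counts = (3, 9, 17, 40, 60)
print([n for n in counts if is_candidate(n, k)])  # -> [9, 17, 40]
```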
The processor (104) further comprises an image generator (114) configured to generate a final candidate image by combining the candidate images (as shown in step 220 of figure 2). The candidate images are combined by marking a pixel as background text (BGT) only if it is decided to be background in more than half of the candidate images. On fulfillment of this predefined condition, the final candidate image is constructed.
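The majority-vote combination can be sketched as follows (binary images use 0 for background and 1 for foreground, an encoding assumed for this illustration):

```python
def combine_candidates(candidates):
    """Per-pixel majority vote over same-sized binary images: a pixel is
    background (0) in the final image only if it is background in more
    than half of the candidate images."""
    h, w = len(candidates[0]), len(candidates[0][0])
    half = len(candidates) / 2
    final = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            background_votes = sum(1 for img in candidates if img[y][x] == 0)
            final[y][x] = 0 if background_votes > half else 1
    return final

# Three 2x2 candidate images with partially disagreeing pixels:
a = [[1, 0], [1, 1]]
b = [[1, 0], [0, 1]]
c = [[0, 0], [0, 1]]
print(combine_candidates([a, b, c]))  # -> [[1, 0], [0, 1]]
```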
The system (100) further comprises an output generating module (116) configured to split the final candidate image into predefined segments such that each segment contains only one valid character. The candidate image is split into a number of segments equal to the number of alphanumeric characters present in the series (as shown in step 222 of figure 2).
In accordance with an embodiment, a conventional method of skew correction is used prior to the segmentation. The following method of segmentation is based on the observation that the number of valid characters is equal to the number of alphanumeric characters present in the series (k). The steps involved in the character and numeral segmentation and recognition method are as follows:
• Identify the columns without any foreground pixel. If consecutive such columns are attained, the middle of these columns is taken as the candidate cut column (CCC). Let the number of CCCs obtained be n.
• Find the distance between consecutive CCCs; let the distance between the ith and the (i+1)th CCC be d_i.
• Find the median (m) of the distances d_1, ..., d_(n-1), where n is the number of CCCs in the image. A heuristically obtained tolerance factor is used to define a threshold in terms of this median.
• If k-1 components are obtained as nearly equally spaced columns, each segment is used as a candidate segment.
• If n > k-1, it is concluded that some valid character is horizontally over-segmented. Such CCCs are subsequently merged and n is reduced by one iteratively.
• If n
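The first two segmentation steps above can be sketched as follows; the later merge/split handling is omitted, and treating even a single foreground-free column as a run is an assumption of this sketch:

```python
from statistics import median

def candidate_cut_columns(binary):
    """Middle column of every maximal run of columns with no foreground pixel
    (each such middle is a candidate cut column, CCC)."""
    w = len(binary[0])
    empty = [all(row[x] == 0 for row in binary) for x in range(w)]
    cccs, start = [], None
    for x in range(w + 1):
        if x < w and empty[x]:
            if start is None:
                start = x  # a run of empty columns begins
        elif start is not None:
            cccs.append((start + x - 1) // 2)  # middle of the run
            start = None
    return cccs

# Two empty-column gaps separate three character blobs:
binary = [[1, 0, 0, 1, 1, 0, 1],
          [1, 0, 0, 1, 0, 0, 1]]
cccs = candidate_cut_columns(binary)
print(cccs)  # -> [1, 5]
# Median inter-CCC distance, used with the tolerance factor to set the threshold:
print(median(b - a for a, b in zip(cccs, cccs[1:])))  # -> 4
```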