Abstract: A method and system are described for language translation of text in an image. An image having one or more words, each comprising at least one character, is received. The image is segmented into first strokes by splitting the at least one character at one or more points that represent a change of angle in the at least one character. A set of first parameters associated with each of the segmented first strokes is determined. For each of the first strokes, the associated set of first parameters is compared with stored sets of second parameters. Each of the stored sets of second parameters is associated with stored second strokes of one or more character sets, each character set being associated with one or more languages. The words in a first language are recognized based on the at least one character. The words in the first language are then converted to words in a second language. FIG. 1
Claims:
WE CLAIM:
1. A method of language translation of an image by a translation device, the method comprising:
receiving, by the translation device, the image representing one or more words in a first language, the one or more words comprising at least one character, wherein the received image is segmented into one or more first strokes by splitting the at least one character at one or more points that represent a change of angle in the at least one character;
determining, by the translation device, a set of first parameters associated with each of the segmented one or more first strokes;
comparing, by the translation device, for each of the one or more first strokes, the associated set of first parameters with a plurality of stored sets of second parameters, wherein each of the plurality of stored sets of second parameters is associated with a plurality of stored second strokes of one or more character sets, each of the one or more character sets being associated with one or more languages;
identifying, by the translation device, one or more second strokes from the plurality of stored second strokes, a second stroke of the one or more second strokes corresponding to a first stroke of the one or more first strokes based on the comparison;
identifying, by the translation device, the at least one character based on the identified one or more second strokes;
recognizing, by the translation device, the one or more words in the first language based on the identified at least one character; and
converting, by the translation device, the one or more words in the first language to one or more words in a second language using a translation dictionary.
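The segmentation and matching steps of claim 1 can be illustrated in code. The following is a minimal sketch, not the claimed implementation: the contour-point input, the 30-degree angle threshold, and the choice of chord length and orientation as the "first parameters" are all illustrative assumptions; the claims leave these abstract.

```python
import math

def segment_strokes(points, angle_threshold_deg=30.0):
    """Split a sequence of contour points into strokes at points where the
    direction changes by more than angle_threshold_deg (an assumed threshold;
    the claim only requires 'a change of angle')."""
    if len(points) < 3:
        return [points]
    strokes, start = [], 0
    for i in range(1, len(points) - 1):
        a1 = math.atan2(points[i][1] - points[i - 1][1],
                        points[i][0] - points[i - 1][0])
        a2 = math.atan2(points[i + 1][1] - points[i][1],
                        points[i + 1][0] - points[i][0])
        turn = abs(math.degrees(a2 - a1))
        turn = min(turn, 360 - turn)  # wrap the turn angle into [0, 180]
        if turn > angle_threshold_deg:
            strokes.append(points[start:i + 1])  # close the stroke at the corner
            start = i  # the corner point also begins the next stroke
    strokes.append(points[start:])
    return strokes

def stroke_parameters(stroke):
    """First parameters for a stroke: here simply its chord length and
    orientation (hypothetical choices for illustration)."""
    (x0, y0), (x1, y1) = stroke[0], stroke[-1]
    return (round(math.hypot(x1 - x0, y1 - y0), 2),
            round(math.degrees(math.atan2(y1 - y0, x1 - x0)), 1))

# An "L"-shaped character: a vertical stroke, a 90-degree corner at (0, 0),
# then a horizontal stroke.
pts = [(0, 4), (0, 3), (0, 2), (0, 1), (0, 0), (1, 0), (2, 0), (3, 0)]
strokes = segment_strokes(pts)
```

Matching would then compare each stroke's parameter tuple against the stored second-parameter sets, e.g. by nearest match, to identify the stored second strokes and, from those, the character.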
2. The method as claimed in claim 1, further comprising:
converting, by the translation device, the one or more words in the second language into voice output.
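The final conversion step of claim 1 can be sketched as a direct dictionary lookup; the dictionary entries below are hypothetical, and a practical translation dictionary would also handle morphology and word order. The resulting second-language words could then be passed to a text-to-speech engine to give the voice output of claim 2.

```python
# Hypothetical one-word-per-entry translation dictionary (French -> English).
translation_dictionary = {"bonjour": "hello", "monde": "world"}

def translate_words(words, dictionary):
    """Convert recognized first-language words to the second language via a
    translation dictionary; unknown words pass through unchanged."""
    return [dictionary.get(w, w) for w in words]

translated = translate_words(["bonjour", "monde"], translation_dictionary)
# → ["hello", "world"]
```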
3. The method as claimed in claim 1, further comprising representing, by the translation device, the image in a matrix format comprising a plurality of cells.
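The matrix representation of claim 3 can be sketched as follows; the binary-pixel input and the 2x2 cell size are illustrative assumptions, not fixed by the claim.

```python
def to_cell_matrix(image, cell_h, cell_w):
    """Represent an image (a grid of pixel values) as a matrix of cells,
    each cell holding the sub-grid of pixels it covers."""
    rows, cols = len(image), len(image[0])
    cells = []
    for r in range(0, rows, cell_h):
        row_of_cells = []
        for c in range(0, cols, cell_w):
            cell = [line[c:c + cell_w] for line in image[r:r + cell_h]]
            row_of_cells.append(cell)
        cells.append(row_of_cells)
    return cells

img = [[0, 1, 1, 0],
       [0, 1, 0, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 1]]
cells = to_cell_matrix(img, 2, 2)  # a 2x2 grid of 2x2-pixel cells
```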
4. The method as claimed in claim 3, wherein determining the set of first parameters further comprises:
scanning, by the translation device, the plurality of cells to determine, for each of the one or more first strokes, a first cell associated therewith.
5. The method as claimed in claim 4, wherein determining the set of first parameters further comprises:
selecting, by the translation device, for each of the one or more first strokes, the associated first cell to determine, for each of the one or more first strokes, a set of third parameters; and
selecting, by the translation device, for each of the one or more first strokes, a second cell associated therewith to determine, for each of the one or more first strokes, a set of fourth parameters, wherein selecting the second cell is based on selecting the first cell.
6. The method as claimed in claim 5, wherein selecting the second cell further comprises:
scanning, by the translation device, a predetermined number of cells from the plurality of cells adjacent to the first cell;
determining, by the translation device, that the second cell is associated with the corresponding first stroke based on the scanning; and
selecting, by the translation device, the second cell based on the determination.
7. The method as claimed in claim 5, wherein determining the set of first parameters for each of the one or more first strokes further comprises aggregating, by the translation device, the corresponding set of third parameters and the set of fourth parameters.
8. The method as claimed in claim 6, wherein the second cell is determined to be associated with the corresponding first stroke when the first stroke extends into the second cell.
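Claims 6 and 8 together describe finding a stroke's second cell by scanning cells adjacent to its first cell and testing whether the stroke extends into them. A minimal sketch, assuming an 8-neighbour scan and a set-of-cells representation of a stroke (both illustrative choices the claims do not fix):

```python
def adjacent_cells(cell, grid_rows, grid_cols):
    """The neighbouring cells of a given (row, col) cell, clipped to the grid;
    the 8-neighbourhood stands in for the claimed 'predetermined number'."""
    r, c = cell
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < grid_rows and 0 <= c + dc < grid_cols]

def second_cell_for_stroke(stroke_cells, first_cell, grid_rows, grid_cols):
    """A neighbour is the second cell when the stroke extends into it
    (claim 8); stroke_cells is the set of cells the stroke passes through."""
    for cell in adjacent_cells(first_cell, grid_rows, grid_cols):
        if cell in stroke_cells:
            return cell
    return None

# A stroke occupying cells (0, 0) and (0, 1) in a 3x3 cell grid:
second = second_cell_for_stroke({(0, 0), (0, 1)}, (0, 0), 3, 3)
# the stroke extends right into the neighbouring cell (0, 1)
```

Per claim 7, the stroke's set of first parameters would then be the aggregate of the third parameters (from the first cell) and the fourth parameters (from the second cell).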
9. A translation device for language translation of an image, comprising:
a processor; and
a memory communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution, cause the processor to:
receive the image representing one or more words in a first language, the one or more words comprising at least one character, wherein the received image is segmented into one or more first strokes by splitting the at least one character at one or more points that represent a change of angle in the at least one character;
determine a set of first parameters associated with each of the segmented one or more first strokes;
compare, for each of the one or more first strokes, the associated set of first parameters with a plurality of stored sets of second parameters, wherein each of the plurality of stored sets of second parameters is associated with a plurality of stored second strokes of one or more character sets, each of the one or more character sets being associated with one or more languages;
identify one or more second strokes from the plurality of stored second strokes, a second stroke of the one or more second strokes corresponding to a first stroke of the one or more first strokes based on the comparison;
identify the at least one character based on the identified one or more second strokes;
recognize the one or more words in the first language based on the identified at least one character; and
convert the one or more words in the first language to one or more words in a second language using a translation dictionary.
10. The translation device as claimed in claim 9, wherein the processor is further caused to:
convert the one or more words in the second language into voice output.
11. The translation device as claimed in claim 9, wherein the processor is further caused to represent the image in a matrix format comprising a plurality of cells.
12. The translation device as claimed in claim 11, wherein, to determine the set of first parameters, the processor is further caused to:
scan the plurality of cells to determine, for each of the one or more first strokes, a first cell associated therewith.
13. The translation device as claimed in claim 12, wherein, to determine the set of first parameters, the processor is further caused to:
select, for each of the one or more first strokes, the associated first cell to determine, for each of the one or more first strokes, a set of third parameters; and
select, for each of the one or more first strokes, a second cell associated therewith to determine, for each of the one or more first strokes, a set of fourth parameters, wherein selecting the second cell is based on selecting the first cell.
14. The translation device as claimed in claim 13, wherein, to select the second cell, the processor is further caused to:
scan a predetermined number of cells from the plurality of cells adjacent to the first cell;
determine that the second cell is associated with the corresponding first stroke based on the scanning; and
select the second cell based on the determination.
15. The translation device as claimed in claim 13, wherein, to determine the set of first parameters for each of the one or more first strokes, the processor is further caused to aggregate the corresponding set of third parameters and the set of fourth parameters.
16. The translation device as claimed in claim 14, wherein the second cell is determined to be associated with the corresponding first stroke when the first stroke extends into the second cell.
Dated this 7th day of March, 2017
Swetha SN
Of K&S Partners
Agent for the Applicant
Description:
TECHNICAL FIELD
This disclosure relates generally to text recognition, and more particularly to a method and system for language translation of text in an image.
| # | Name | Date |
|---|---|---|
| 1 | Power of Attorney [07-03-2017(online)].pdf | 2017-03-07 |
| 2 | Form 5 [07-03-2017(online)].pdf | 2017-03-07 |
| 3 | Form 3 [07-03-2017(online)].pdf | 2017-03-07 |
| 4 | Form 18 [07-03-2017(online)].pdf_203.pdf | 2017-03-07 |
| 5 | Form 18 [07-03-2017(online)].pdf | 2017-03-07 |
| 6 | Form 1 [07-03-2017(online)].pdf | 2017-03-07 |
| 7 | Drawing [07-03-2017(online)].pdf | 2017-03-07 |
| 8 | Description(Complete) [07-03-2017(online)].pdf_202.pdf | 2017-03-07 |
| 9 | Description(Complete) [07-03-2017(online)].pdf | 2017-03-07 |
| 10 | PROOF OF RIGHT [19-06-2017(online)].pdf | 2017-06-19 |
| 11 | Correspondence By Agent_Form 30,Form 1_21-06-2017.pdf | 2017-06-21 |
| 12 | 201743007876-FER.pdf | 2020-07-09 |
| 13 | 201743007876-PETITION UNDER RULE 137 [06-01-2021(online)].pdf | 2021-01-06 |
| 14 | 201743007876-OTHERS [06-01-2021(online)].pdf | 2021-01-06 |
| 15 | 201743007876-FORM 3 [06-01-2021(online)].pdf | 2021-01-06 |
| 16 | 201743007876-FER_SER_REPLY [06-01-2021(online)].pdf | 2021-01-06 |
| 17 | 201743007876-DRAWING [06-01-2021(online)].pdf | 2021-01-06 |
| 18 | 201743007876-COMPLETE SPECIFICATION [06-01-2021(online)].pdf | 2021-01-06 |
| 19 | 201743007876-CLAIMS [06-01-2021(online)].pdf | 2021-01-06 |
| 20 | 201743007876-US(14)-HearingNotice-(HearingDate-22-12-2022).pdf | 2022-11-21 |
| 21 | 201743007876-POA [01-12-2022(online)].pdf | 2022-12-01 |
| 22 | 201743007876-FORM 13 [01-12-2022(online)].pdf | 2022-12-01 |
| 23 | 201743007876-Correspondence to notify the Controller [01-12-2022(online)].pdf | 2022-12-01 |
| 24 | 201743007876-AMENDED DOCUMENTS [01-12-2022(online)].pdf | 2022-12-01 |
| 25 | 201743007876-Written submissions and relevant documents [06-01-2023(online)].pdf | 2023-01-06 |
| 26 | 201743007876-PatentCertificate09-03-2023.pdf | 2023-03-09 |
| 27 | 201743007876-IntimationOfGrant09-03-2023.pdf | 2023-03-09 |
| 1 | SearchStrategyMatrixE_08-07-2020.pdf | |