Abstract: Disclosed herein are a method and a system for determining a relationship among text segments in signboards for navigating autonomous vehicles. Images of signboards are captured and analyzed to determine text segments in the images. Further, a relationship among the identified text segments is determined based on a relationship among a plurality of text regions forming the text segments. Finally, information related to the relationship of the text segments is provided to a navigation unit in the autonomous vehicle for facilitating navigation of the autonomous vehicle. In an embodiment, the method of the present disclosure helps in eliminating prospective errors in the text identification process, caused by irregular arrangement of text segments in the signboards, by determining the relationship among localized text segments in the images of the signboards. FIG. 1
Claims:
WE CLAIM:
1. A method of determining relationship among text segments in a signboard (105) for navigating an autonomous vehicle (103), the method comprising:
capturing, by a text segment recognition system (101) associated with the autonomous vehicle (103), one or more images (108) of the signboard (105) using one or more image capturing devices (107) associated with the autonomous vehicle (103);
determining, by the text segment recognition system (101), one or more text regions (211), forming a plurality of text segments, in each of the one or more images (108);
identifying, by the text segment recognition system (101), one or more text nodes (213) in each of the one or more text regions (211);
determining, by the text segment recognition system (101), relationship among the one or more text regions (211) by identifying relationship among each of the one or more text nodes (213) corresponding to the one or more text regions (211);
clustering, by the text segment recognition system (101), each of the one or more text regions (211) based on the relationship among each of the one or more text regions (211) for determining the relationship among the plurality of text segments in the signboard (105); and
providing, by the text segment recognition system (101), information related to the relationship among the plurality of text segments to a navigation unit, associated with the autonomous vehicle (103), for facilitating navigation of the autonomous vehicle (103).
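The steps of claim 1 can be sketched as a short pipeline. The helper names below (`corner_nodes`, `related`, `cluster_regions`) and the bounding-box representation of a text region are illustrative assumptions, not part of the disclosure:

```python
# Minimal sketch of the claimed pipeline, assuming each text region is
# represented by its axis-aligned bounding box (x1, y1, x2, y2).
# All helper names are hypothetical illustrations.

def corner_nodes(region):
    """Text nodes at the two predetermined reference positions:
    the left top-most and right bottom-most corners of the region."""
    x1, y1, x2, y2 = region
    return [(x1, y1), (x2, y2)]

def related(region_a, region_b, threshold):
    """Two regions are related when any pair of their text nodes
    lies within the predefined distance threshold."""
    return any(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= threshold
        for ax, ay in corner_nodes(region_a)
        for bx, by in corner_nodes(region_b)
    )

def cluster_regions(regions, threshold):
    """Group related regions transitively, yielding one cluster per
    text segment on the signboard."""
    clusters = []
    for region in regions:
        merged = [c for c in clusters
                  if any(related(region, r, threshold) for r in c)]
        for c in merged:
            clusters.remove(c)
        clusters.append([r for c in merged for r in c] + [region])
    return clusters
```

On this sketch, two nearby word boxes on the same signboard line fall into one cluster, while a distant box forms its own cluster; the cluster membership is the information handed to the navigation unit.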
2. The method as claimed in claim 1, wherein the one or more text regions (211) are determined by processing each of the one or more images (108) using one or more predetermined image processing techniques.
3. The method as claimed in claim 1, wherein identifying the one or more text nodes (213) in each of the one or more text regions (211) comprises:
identifying, by the text segment recognition system (101), one or more predetermined reference positions in each of the one or more text regions (211);
determining, by the text segment recognition system (101), co-ordinates of each of the one or more predetermined reference positions; and
tracing, by the text segment recognition system (101), co-ordinate axes corresponding to the co-ordinates of each of the one or more predetermined reference positions for identifying the one or more text nodes (213).
4. The method as claimed in claim 3, wherein the one or more predetermined reference positions in the one or more text regions (211) comprise a left top-most corner position and a right bottom-most corner position.
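The node-identification steps of claims 3 and 4 can be illustrated as follows, under the assumption (not stated in the claims) that a text region is available as a binary pixel mask; the function names are hypothetical:

```python
import numpy as np

# Illustrative sketch only: assumes a text region is a binary mask
# (non-zero where text pixels lie). The two predetermined reference
# positions are the left top-most and right bottom-most corners of
# the region's bounding box.

def reference_positions(mask):
    ys, xs = np.nonzero(mask)
    top_left = (int(xs.min()), int(ys.min()))
    bottom_right = (int(xs.max()), int(ys.max()))
    return top_left, bottom_right

def text_nodes(mask):
    # Tracing the co-ordinate axes through both reference positions
    # yields the four corner nodes of the region.
    (x1, y1), (x2, y2) = reference_positions(mask)
    return [(x1, y1), (x2, y1), (x1, y2), (x2, y2)]
```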
5. The method as claimed in claim 1, wherein determining the relationship among the one or more text regions (211) comprises:
computing, by the text segment recognition system (101), a distance matrix of each of the one or more text nodes (213) based on co-ordinates of each of the one or more text nodes (213);
determining, by the text segment recognition system (101), distance between each of the one or more text nodes (213) using the distance matrix; and
identifying, by the text segment recognition system (101), the relationship among each of the one or more text nodes (213) based on the distance between each of the one or more text nodes (213).
6. The method as claimed in claim 5, wherein the one or more text nodes (213) are identified to be related when the distance between the one or more text nodes (213) is less than or equal to a predefined threshold value.
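The distance-matrix steps of claims 5 and 6 can be sketched as below; Euclidean distance and the function names are assumptions for illustration, since the claims do not fix a particular distance metric:

```python
import numpy as np

# Sketch of the distance-matrix step: pairwise distances between
# text-node co-ordinates, with relatedness decided against a
# predefined threshold. Metric and names are illustrative.

def distance_matrix(nodes):
    pts = np.asarray(nodes, dtype=float)          # shape (n, 2)
    diff = pts[:, None, :] - pts[None, :, :]      # shape (n, n, 2)
    return np.sqrt((diff ** 2).sum(axis=-1))      # shape (n, n)

def related_pairs(nodes, threshold):
    """Index pairs of text nodes whose distance is less than or
    equal to the predefined threshold value."""
    d = distance_matrix(nodes)
    n = len(nodes)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if d[i, j] <= threshold]
```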
7. A text segment recognition system (101) for determining relationship among text segments in a signboard (105) for navigating an autonomous vehicle (103), the text segment recognition system (101) comprising:
a processor (203); and
a memory (205), communicatively coupled to the processor (203), wherein the memory (205) stores processor-executable instructions, which on execution, cause the processor (203) to:
capture one or more images (108) of the signboard (105) using one or more image capturing devices (107) associated with the autonomous vehicle (103);
determine one or more text regions (211), forming a plurality of text segments, in each of the one or more images (108);
identify one or more text nodes (213) in each of the one or more text regions (211);
determine relationship among the one or more text regions (211) by identifying relationship among each of the one or more text nodes (213) corresponding to the one or more text regions (211);
cluster each of the one or more text regions (211) based on the relationship among each of the one or more text regions (211) for determining the relationship among the plurality of text segments in the signboard (105); and
provide information related to the relationship among the plurality of text segments to a navigation unit, associated with the autonomous vehicle (103), for facilitating navigation of the autonomous vehicle (103).
8. The text segment recognition system (101) as claimed in claim 7, wherein the text segment recognition system (101) is associated with the autonomous vehicle (103).
9. The text segment recognition system (101) as claimed in claim 7, wherein the instructions cause the processor (203) to determine the one or more text regions (211) by processing each of the one or more images (108) using one or more predetermined image processing techniques.
10. The text segment recognition system (101) as claimed in claim 7, wherein to identify the one or more text nodes (213) in each of the one or more text regions (211), the instructions cause the processor (203) to:
identify one or more predetermined reference positions in each of the one or more text regions (211);
determine co-ordinates of each of the one or more predetermined reference positions; and
trace co-ordinate axes corresponding to the co-ordinates of each of the one or more predetermined reference positions to identify the one or more text nodes (213).
11. The text segment recognition system (101) as claimed in claim 10, wherein the one or more predetermined reference positions in the one or more text regions (211) comprise a left top-most corner position and a right bottom-most corner position.
12. The text segment recognition system (101) as claimed in claim 7, wherein to determine the relationship among the one or more text regions (211), the instructions cause the processor (203) to:
compute a distance matrix of each of the one or more text nodes (213) based on co-ordinates of each of the one or more text nodes (213);
determine distance between each of the one or more text nodes (213) using the distance matrix; and
identify the relationship among each of the one or more text nodes (213) based on the distance between each of the one or more text nodes (213).
13. The text segment recognition system (101) as claimed in claim 12, wherein the instructions cause the processor (203) to identify the one or more text nodes (213) as related when the distance between the one or more text nodes (213) is less than or equal to a predefined threshold value.
Dated this 30th day of November 2017
SWETHA S. N
OF K&S PARTNERS
ATTORNEY FOR THE APPLICANT
Description:
TECHNICAL FIELD
The present subject matter is related, in general, to autonomous vehicles, and more particularly, but not exclusively, to a method and system for determining a relationship among text segments in a signboard for navigating an autonomous vehicle.
| # | Name | Date |
|---|---|---|
| 1 | 201741043019-STATEMENT OF UNDERTAKING (FORM 3) [30-11-2017(online)].pdf | 2017-11-30 |
| 2 | 201741043019-REQUEST FOR EXAMINATION (FORM-18) [30-11-2017(online)].pdf | 2017-11-30 |
| 3 | 201741043019-POWER OF AUTHORITY [30-11-2017(online)].pdf | 2017-11-30 |
| 4 | 201741043019-FORM 18 [30-11-2017(online)].pdf | 2017-11-30 |
| 5 | 201741043019-FORM 1 [30-11-2017(online)].pdf | 2017-11-30 |
| 6 | 201741043019-DRAWINGS [30-11-2017(online)].pdf | 2017-11-30 |
| 7 | 201741043019-DECLARATION OF INVENTORSHIP (FORM 5) [30-11-2017(online)].pdf | 2017-11-30 |
| 8 | 201741043019-COMPLETE SPECIFICATION [30-11-2017(online)].pdf | 2017-11-30 |
| 9 | abstract_201741043019.jpg | 2017-12-01 |
| 10 | 201741043019-REQUEST FOR CERTIFIED COPY [01-12-2017(online)].pdf | 2017-12-01 |
| 11 | 201741043019-Proof of Right (MANDATORY) [09-12-2017(online)].pdf | 2017-12-09 |
| 12 | Correspondence by Agent_Form1_13-12-2017.pdf | 2017-12-13 |
| 13 | 201741043019-REQUEST FOR CERTIFIED COPY [12-03-2018(online)].pdf | 2018-03-12 |
| 14 | 201741043019-PETITION UNDER RULE 137 [29-03-2021(online)].pdf | 2021-03-29 |
| 15 | 201741043019-FORM 3 [29-03-2021(online)].pdf | 2021-03-29 |
| 16 | 201741043019-OTHERS [01-04-2021(online)].pdf | 2021-04-01 |
| 17 | 201741043019-FER_SER_REPLY [01-04-2021(online)].pdf | 2021-04-01 |
| 18 | 201741043019-DRAWING [01-04-2021(online)].pdf | 2021-04-01 |
| 19 | 201741043019-CLAIMS [01-04-2021(online)].pdf | 2021-04-01 |
| 20 | 201741043019-FER.pdf | 2021-10-17 |
| 21 | 201741043019-PatentCertificate27-10-2021.pdf | 2021-10-27 |
| 22 | 201741043019-IntimationOfGrant27-10-2021.pdf | 2021-10-27 |
| 23 | 201741043019-PROOF OF ALTERATION [26-01-2022(online)].pdf | 2022-01-26 |
| 24 | 201741043019-RELEVANT DOCUMENTS [30-09-2023(online)].pdf | 2023-09-30 |