Abstract: The disclosed subject matter relates to image processing and includes a method for marking content on a surface of an object using a laser. A transformation identification system receives an input image of the object and extracts data corresponding to a region of interest in the input image. The extracted data is compared with extracted template data corresponding to regions of interest present in template images of one of one or more template objects. A transformation in the position of the object with respect to the position of the one of the one or more template objects is determined based on the comparison. Finally, an inverse of the transformation is applied to content data that is to be marked at a desired location within the region of interest of the object. The present disclosure analyses the images at a sub-pixel level based on a machine learning approach to determine the transformation. FIG. 2A
Claims:
WE CLAIM:
1. A method for marking content on a surface of an object using a laser, the method comprising:
receiving, by a transformation identification system (107), an input image of the object from one or more image sources (103);
extracting, by the transformation identification system (107), data corresponding to a region of interest in the input image;
comparing, by the transformation identification system (107), the extracted data with extracted template data corresponding to regions of interest present in one or more template images of one of one or more template objects, wherein the template data is extracted based on machine learning performed on the one or more template images;
determining, by the transformation identification system (107), a transformation in a position of the object with respect to a position of the one of the one or more template objects, based on the comparison; and
applying, by the transformation identification system (107), an inverse of the transformation to content data that is to be marked at a desired location within the region of interest of the object, wherein the inverse of the transformation is applied prior to marking the content data.
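The determine-and-invert steps of claim 1 can be illustrated with a minimal sketch. This is not the patented implementation: it assumes the regions of interest have already been reduced to matched point sets, recovers a rigid transformation (rotation plus translation) by a least-squares Kabsch fit, and applies its inverse to map coordinates back into the template frame. The function names and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def estimate_rigid_transform(template_pts, object_pts):
    """Least-squares rotation + translation (Kabsch fit) mapping
    template ROI points onto the observed object's ROI points."""
    t_mean = template_pts.mean(axis=0)
    o_mean = object_pts.mean(axis=0)
    H = (template_pts - t_mean).T @ (object_pts - o_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = o_mean - R @ t_mean
    return R, t

def apply_inverse(R, t, pts):
    """Apply the inverse of the estimated transformation: for a
    rotation, R^-1 = R^T, so the inverse is (pts - t) @ R."""
    return (pts - t) @ R

# Template ROI corners, and the same corners after the part shifted and rotated.
template = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
observed = template @ R_true.T + np.array([2.0, -1.0])

R, t = estimate_rigid_transform(template, observed)
recovered = apply_inverse(R, t, observed)   # lands back on the template frame
```

The same inverse would be applied to the content-data coordinates before marking, so the mark lands at the desired location despite the object's displacement.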
2. The method as claimed in claim 1, wherein the transformation comprises at least one of translation, rotation, and skew.
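The transformation types named in claim 2 compose naturally as 3×3 homogeneous matrices, and the inverse applied in claim 1 is then a single matrix inverse. The matrix forms below are standard; the particular composition order is an assumption for illustration.

```python
import numpy as np

def translation(tx, ty):
    # Homogeneous 2-D translation matrix.
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def rotation(theta):
    # Homogeneous 2-D rotation matrix (counter-clockwise, radians).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def skew(kx):
    # Homogeneous 2-D shear along the x axis.
    return np.array([[1.0, kx, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

# A combined pose error (translation, rotation, and skew) and the
# corrective inverse that would realign the content before marking.
T = translation(2.0, -1.0) @ rotation(np.deg2rad(15.0)) @ skew(0.1)
T_inv = np.linalg.inv(T)
```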
3. The method as claimed in claim 1, further comprising realigning, by the transformation identification system (107), a laser head of a laser marking device associated with the transformation identification system (107) based on the inverse of the transformation for marking the content data at the desired location.
4. The method as claimed in claim 1, wherein the template data is obtained by:
receiving, by the transformation identification system (107), the one or more template images of the one or more template objects along with the content data, wherein the content data comprises vectors and content attributes corresponding to a content to be marked on a surface of the template object;
extracting, by the transformation identification system (107), the template data corresponding to the regions of interest, based on analysis of contours of each of the one or more template objects present in each of the one or more template images using one or more image processing algorithms; and
storing, by the transformation identification system (107), the template data along with the content data in a mark file having a predefined format.
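Claim 4 specifies only that the mark file has "a predefined format". One plausible serialization, purely as an illustrative assumption (the field names, JSON encoding, and units below are invented for this sketch and are not taken from the disclosure), is:

```python
import json, os, tempfile

# Hypothetical mark-file layout: template ROI contours alongside the
# content vectors and content attributes to be marked.
mark = {
    "template": {
        "rois": [{"id": 1, "contour": [[0, 0], [10, 0], [10, 5], [0, 5]]}],
    },
    "content": {
        "vectors": [[2, 1], [8, 1]],          # polyline to engrave
        "attributes": {"text": "LOT-1234", "depth_um": 25, "passes": 2},
    },
}

# Write the mark file, then read it back as the laser marking device would.
path = os.path.join(tempfile.gettempdir(), "part.mark.json")
with open(path, "w") as f:
    json.dump(mark, f, indent=2)
with open(path) as f:
    loaded = json.load(f)
```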
5. The method as claimed in claim 4, wherein the mark file is transmitted to a laser marking device associated with the transformation identification system (107) for marking the content data on the object.
6. The method as claimed in claim 4, wherein the image processing algorithms are based on at least one of a Matrox Imaging Library (MIL) blob analysis, an image calibration operation, an image edge finder algorithm, a geometric model finder algorithm, measurement, metrology, and/or one or more pattern matching operations.
7. The method as claimed in claim 1, wherein the data and the template data comprise information related to at least one of edges and/or contours of the object and the one or more template objects, respectively.
8. The method as claimed in claim 1, wherein the comparison is performed by the transformation identification system (107) at a sub-pixel level.
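The sub-pixel comparison of claim 8 is commonly achieved by interpolating around an integer-pixel feature. As a hedged sketch (parabolic interpolation of a gradient peak is a standard refinement technique, not necessarily the one used in the disclosure):

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile with sub-pixel
    precision by fitting a parabola to the gradient-magnitude peak."""
    g = np.abs(np.gradient(profile.astype(float)))
    i = int(np.argmax(g[1:-1])) + 1           # integer-pixel peak
    denom = g[i - 1] - 2 * g[i] + g[i + 1]
    # Vertex of the parabola through the three samples around the peak.
    offset = 0.0 if denom == 0 else 0.5 * (g[i - 1] - g[i + 1]) / denom
    return i + offset

# Smooth step edge whose true position lies between pixels 4 and 5.
x = np.arange(10)
profile = 1.0 / (1.0 + np.exp(-(x - 4.5) * 2.0))
edge = subpixel_edge(profile)   # recovered at roughly 4.5, not an integer
```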
9. A transformation identification system (107) for marking content on a surface of an object using a laser, the transformation identification system (107) comprising:
a processor (109); and
a memory (113) communicatively coupled to the processor (109), wherein the memory (113) stores processor-executable instructions which, on execution, cause the processor (109) to:
receive an input image of the object from one or more image sources (103);
extract data corresponding to a region of interest in the input image;
compare the extracted data with extracted template data corresponding to regions of interest present in one or more template images of one of one or more template objects, wherein the template data is extracted based on machine learning performed on the one or more template images;
determine a transformation in a position of the object with respect to the position of the one of the one or more template objects, based on the comparison; and
apply an inverse of the transformation to content data that is to be marked at a desired location within the region of interest of the object, wherein the inverse of the transformation is applied prior to marking the content data.
10. The transformation identification system (107) as claimed in claim 9, wherein the transformation comprises at least one of translation, rotation, and skew.
11. The transformation identification system (107) as claimed in claim 9, wherein the processor (109) is further configured to realign a laser head of a laser marking device associated with the transformation identification system (107) based on the inverse of the transformation for marking the content data at the desired location.
12. The transformation identification system (107) as claimed in claim 9, wherein to obtain the template data, the instructions cause the processor (109) to:
receive the one or more template images of the one or more template objects along with the content data, wherein the content data comprises vectors and content attributes corresponding to a content to be marked on a surface of the template object;
extract the template data corresponding to the regions of interest, based on analysis of contours of each of the one or more template objects present in each of the one or more template images using one or more image processing algorithms; and
store the template data along with the content data in a mark file having a predefined format.
13. The transformation identification system (107) as claimed in claim 12, wherein the processor (109) transmits the mark file to a laser marking device associated with the transformation identification system (107) for marking the content data on the object.
14. The transformation identification system (107) as claimed in claim 12, wherein the image processing algorithms are based on at least one of a Matrox Imaging Library (MIL) blob analysis, an image calibration operation, an image edge finder algorithm, a geometric model finder algorithm, measurement, metrology, and/or one or more pattern matching operations.
15. The transformation identification system (107) as claimed in claim 9, wherein the data and the template data comprise information related to at least one of edges and/or contours of the object and the one or more template objects, respectively.
16. The transformation identification system (107) as claimed in claim 9, wherein the processor (109) performs the comparison at a sub-pixel level.
Dated this 4th day of March 2017
SWETHA S.N
OF K & S PARTNERS
AGENT FOR THE APPLICANT
Description: TECHNICAL FIELD
The present subject matter relates generally to image processing, and more particularly, but not exclusively, to a method and system for marking content on the surface of an object using a laser.
| # | Name | Date |
|---|---|---|
| 1 | Form 5 [04-03-2017(online)].pdf | 2017-03-04 |
| 2 | Form 3 [04-03-2017(online)].pdf | 2017-03-04 |
| 3 | Form 18 [04-03-2017(online)].pdf_853.pdf | 2017-03-04 |
| 4 | Form 18 [04-03-2017(online)].pdf | 2017-03-04 |
| 5 | Form 1 [04-03-2017(online)].pdf | 2017-03-04 |
| 6 | Drawing [04-03-2017(online)].pdf | 2017-03-04 |
| 7 | Description(Complete) [04-03-2017(online)].pdf_852.pdf | 2017-03-04 |
| 8 | Description(Complete) [04-03-2017(online)].pdf | 2017-03-04 |
| 9 | REQUEST FOR CERTIFIED COPY [07-03-2017(online)].pdf | 2017-03-07 |
| 10 | Form 26 [07-03-2017(online)].pdf | 2017-03-07 |
| 11 | Request For Certified Copy-Online.pdf | 2017-03-10 |
| 12 | 201741007702-REQUEST FOR CERTIFIED COPY [14-07-2017(online)].pdf | 2017-07-14 |
| 13 | 201741007702-Proof of Right (MANDATORY) [11-12-2017(online)].pdf | 2017-12-11 |
| 14 | Correspondence by Agent_Form1_13-12-2017.pdf | 2017-12-13 |
| 15 | 201741007702-FER.pdf | 2020-02-10 |
| 16 | 201741007702-PETITION UNDER RULE 137 [03-08-2020(online)].pdf | 2020-08-03 |
| 17 | 201741007702-OTHERS [03-08-2020(online)].pdf | 2020-08-03 |
| 18 | 201741007702-FORM 3 [03-08-2020(online)].pdf | 2020-08-03 |
| 19 | 201741007702-FER_SER_REPLY [03-08-2020(online)].pdf | 2020-08-03 |
| 20 | 201741007702-CORRESPONDENCE [03-08-2020(online)].pdf | 2020-08-03 |
| 21 | 201741007702-COMPLETE SPECIFICATION [03-08-2020(online)].pdf | 2020-08-03 |
| 22 | 201741007702-CLAIMS [03-08-2020(online)].pdf | 2020-08-03 |
| 23 | 201741007702-US(14)-HearingNotice-(HearingDate-19-05-2022).pdf | 2022-04-25 |
| 24 | 201741007702-POA [04-05-2022(online)].pdf | 2022-05-04 |
| 25 | 201741007702-FORM 13 [04-05-2022(online)].pdf | 2022-05-04 |
| 26 | 201741007702-Correspondence to notify the Controller [04-05-2022(online)].pdf | 2022-05-04 |
| 27 | 201741007702-AMENDED DOCUMENTS [04-05-2022(online)].pdf | 2022-05-04 |
| 28 | 201741007702-Written submissions and relevant documents [27-05-2022(online)].pdf | 2022-05-27 |
| 29 | 201741007702-PatentCertificate17-10-2022.pdf | 2022-10-17 |
| 30 | 201741007702-IntimationOfGrant17-10-2022.pdf | 2022-10-17 |