Abstract: The method and system of the present disclosure relate to facilitating identification of a layout of a user interface. The method includes receiving a plurality of screenshots of a plurality of user-interfaces. From each of the plurality of screenshots, text elements and their corresponding actionable elements are extracted. The system then identifies properties of the actionable elements in each of the plurality of screenshots, which indicate the functionality of the actionable elements. Based on the properties, the system associates the text elements of one screenshot, associated with one user-interface, with the actionable elements of another screenshot, associated with another user-interface. The system then creates clusters of the text elements and the actionable elements based on the association. The clusters facilitate the identification of the layout of the user interface. In this way, the system optimizes automation of the application by eliminating the requirement of modifying the source code of the application. FIG. 1
Claims: We Claim:
1. A method of facilitating identification of a layout of a user interface, the method comprising:
receiving, by a layout identification system (102), a plurality of screenshots (101) of a plurality of user-interfaces, wherein each screenshot of the plurality of screenshots (101) indicates a phase associated with a workflow of an application;
extracting, by the layout identification system (102), one or more text elements (212) and one or more corresponding actionable elements (214) from each of the plurality of screenshots (101);
identifying, by the layout identification system (102), properties of the one or more actionable elements (214) in each of the plurality of screenshots (101), wherein the properties indicate functionality of the one or more actionable elements (214);
associating, by the layout identification system (102), the one or more text elements (212), of one screenshot associated with one of the plurality of user-interfaces, with the one or more actionable elements (214), of another screenshot associated with another user-interface of the plurality of user-interfaces, based on the properties; and
creating, by the layout identification system (102), one or more clusters of the one or more text elements (212) and the one or more actionable elements (214) based on the association, wherein the one or more clusters facilitate identification of the layout of the user interface.
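The association and clustering steps of claim 1 can be sketched in code. The claim does not specify data structures, so the dictionary layout, the `property` tags, and the function name below are all assumptions; the sketch groups text and actionable elements that share a property and keeps only clusters that associate elements across different screenshots, mirroring the cross-screenshot association step:

```python
from collections import defaultdict

def cluster_elements(text_elements, actionable_elements):
    """Group text and actionable elements that share a property tag.

    Each element is a dict with a 'screenshot' key, a 'label' or 'widget'
    key, and a 'property' key (e.g. 'text_enabled', 'unchecked').
    Returns a dict mapping property -> list of associated elements.
    """
    clusters = defaultdict(list)
    for t in text_elements:
        clusters[t["property"]].append(("text", t["screenshot"], t["label"]))
    for a in actionable_elements:
        clusters[a["property"]].append(("action", a["screenshot"], a["widget"]))
    # Keep only clusters that associate elements drawn from more than one
    # screenshot, as the claim associates elements of one screenshot with
    # elements of another.
    return {prop: items for prop, items in clusters.items()
            if len({shot for _, shot, _ in items}) > 1}

texts = [{"screenshot": "login.png", "label": "Username",
          "property": "text_enabled"}]
actions = [{"screenshot": "home.png", "widget": "textbox",
            "property": "text_enabled"},
           {"screenshot": "home.png", "widget": "checkbox",
            "property": "unchecked"}]
result = cluster_elements(texts, actions)
```

Here the `text_enabled` cluster survives because it spans two screenshots, while the single-screenshot `unchecked` cluster is dropped.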
2. The method as claimed in claim 1, wherein:
the one or more text elements (212) comprise at least one of text labels, anchor links, and paragraphs, and
the one or more corresponding actionable elements (214) comprise at least one of radio buttons, check boxes, menus, submenus, text boxes, drop boxes, and icons.
3. The method as claimed in claim 1, wherein the properties comprise at least one of a text-enabled, text-disabled, checked, and unchecked state associated with the one or more corresponding actionable elements (214).
4. The method as claimed in claim 1, further comprising learning about similarity between the one or more actionable elements (214) of the one user-interface and the one or more actionable elements (214) of another user-interface, based on the one or more clusters.
5. The method as claimed in claim 4, further comprising training the layout identification system (102) based on the learning for identifying a layout of a new user interface.
6. The method as claimed in claim 1, further comprising automating the application based on the identification of the layout of the user interface, wherein the application comprises at least one of an online ticketing application, login screens, and SAP applications.
7. The method as claimed in claim 1, wherein the one or more text elements (212) are extracted by:
converting the plurality of screenshots (101) into a binary image by using adaptive binarization,
identifying contours from the binary image in order to detect regions outlined against a background of the binary image, and
applying a text detection technique and an optical character recognition technique upon the regions outlined against the background in order to recognize the one or more text elements (212).
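The claim does not name any particular libraries for these steps; in practice, OpenCV's `cv2.adaptiveThreshold` and `cv2.findContours` followed by an OCR engine such as Tesseract would be typical choices. As a dependency-free illustration, the adaptive binarization step can be sketched with NumPy alone, using an integral image so every pixel is compared against its local-window mean (function name and parameter defaults are assumptions):

```python
import numpy as np

def adaptive_binarize(gray, window=15, offset=10):
    """Mark a pixel as foreground when it is darker than its local mean.

    gray: 2-D grayscale array. window: side of the square neighborhood.
    offset: margin subtracted from the local mean, suppressing noise.
    Returns a uint8 array of 0s and 1s (1 = likely text/foreground).
    """
    h, w = gray.shape
    pad = window // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    # Integral image: integral[a, b] = sum of padded[:a, :b], so any
    # window sum costs four lookups instead of window*window additions.
    integral = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    s = (integral[window:, window:] - integral[:-window, window:]
         - integral[window:, :-window] + integral[:-window, :-window])
    local_mean = s / (window * window)
    return (gray.astype(np.float64) < local_mean - offset).astype(np.uint8)

# Demo: a dark 10x10 "glyph" on a light background becomes foreground.
gray = np.full((40, 40), 200.0)
gray[10:20, 10:20] = 50.0
binary = adaptive_binarize(gray)
```

Contour detection and OCR over the resulting foreground regions would follow as separate steps and are not sketched here.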
8. The method as claimed in claim 1, wherein the one or more actionable elements (214) are extracted by:
segmenting the plurality of screenshots (101) into one or more overlapping patches,
passing the one or more overlapping patches through a segmentation model in order to generate one or more segmented images,
generating a single image by merging the one or more segmented images, and
applying a stitching technique upon the single image.
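The patch-based extraction of claim 8 can be sketched as follows. The claim names no segmentation model, so the demo stands in an identity "model"; the function names, patch size, and stride are assumptions. Overlapping patches are cut with a final flush patch per axis so the whole image is covered, and per-patch outputs are merged back by averaging where patches overlap:

```python
import numpy as np

def _positions(size, patch, stride):
    """Top-left offsets along one axis, ending with a flush final patch."""
    pos = list(range(0, size - patch + 1, stride))
    if pos[-1] != size - patch:
        pos.append(size - patch)
    return pos

def split_into_patches(img, patch=64, stride=48):
    """Segment an image into overlapping patches (first claimed step)."""
    h, w = img.shape[:2]
    return [((r, c), img[r:r + patch, c:c + patch])
            for r in _positions(h, patch, stride)
            for c in _positions(w, patch, stride)]

def merge_patches(patches, shape, patch=64):
    """Merge per-patch outputs into a single image, averaging overlaps."""
    out = np.zeros(shape, dtype=np.float64)
    weight = np.zeros(shape, dtype=np.float64)
    for (r, c), p in patches:
        out[r:r + patch, c:c + patch] += p
        weight[r:r + patch, c:c + patch] += 1.0
    return out / np.maximum(weight, 1.0)

# Demo: with an identity "segmentation model", merging reconstructs the image.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
segmented = [((r, c), p) for (r, c), p in split_into_patches(img)]
merged = merge_patches(segmented, img.shape)
```

Averaging overlapping outputs is one common merging choice; the claim's final "stitching technique" over the merged image is left abstract here.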
9. A layout identification system (102) for facilitating identification of a layout of a user interface, the system (102) comprising:
a processor (204); and
a memory (206) communicatively coupled to the processor (204), wherein the memory (206) stores processor-executable instructions which, on execution, cause the processor (204) to:
receive a plurality of screenshots (101) of a plurality of user-interfaces, wherein each screenshot of the plurality of screenshots (101) indicates a phase associated with a workflow of an application;
extract one or more text elements (212) and one or more corresponding actionable elements (214) from each of the plurality of screenshots (101);
identify properties of the one or more actionable elements (214) in each of the plurality of screenshots (101), wherein the properties indicate functionality of the one or more actionable elements (214);
associate the one or more text elements (212), of one screenshot associated with one of the plurality of user-interfaces, with the one or more actionable elements (214), of another screenshot associated with another user-interface of the plurality of user-interfaces, based on the properties; and
create one or more clusters of the one or more text elements (212) and the one or more actionable elements (214) based on the association, wherein the one or more clusters facilitate identification of the layout of the user interface.
10. The layout identification system (102) as claimed in claim 9, wherein:
the one or more text elements (212) comprise at least one of text labels, anchor links, and paragraphs, and
the one or more corresponding actionable elements (214) comprise at least one of radio buttons, check boxes, menus, submenus, text boxes, drop boxes, and icons.
11. The layout identification system (102) as claimed in claim 9, wherein the properties comprise at least one of a text-enabled, text-disabled, checked, and unchecked state associated with the one or more corresponding actionable elements (214).
12. The layout identification system (102) as claimed in claim 9, wherein the system (102) is configured to learn about similarity between the one or more actionable elements (214) of the one user-interface and the one or more actionable elements (214) of another user-interface, based on the one or more clusters.
13. The layout identification system (102) as claimed in claim 12, wherein the system (102) is further configured to be trained, based on the learning, for identifying a layout of a new user interface.
14. The layout identification system (102) as claimed in claim 9, wherein the system (102) is further configured to automate the application based on the identification of the layout of the user interface, wherein the application comprises at least one of an online ticketing application, login screens, and SAP applications.
15. The layout identification system (102) as claimed in claim 9, wherein the system (102) extracts the one or more text elements (212) by:
converting the plurality of screenshots (101) into a binary image by using adaptive binarization,
identifying contours from the binary image in order to detect regions outlined against a background of the binary image, and
applying a text detection technique and an optical character recognition technique upon the regions outlined against the background in order to recognize the one or more text elements (212).
16. The layout identification system (102) as claimed in claim 9, wherein the system (102) extracts the one or more actionable elements (214) by:
segmenting the plurality of screenshots (101) into one or more overlapping patches,
passing the one or more overlapping patches through a segmentation model in order to generate one or more segmented images,
generating a single image by merging the one or more segmented images, and
applying a stitching technique upon the single image.
Dated this 31st day of May, 2017
Swetha SN
Of K&S Partners
Agent for the Applicant
Description: TECHNICAL FIELD
The present subject matter is related, in general, to user interface layouts and, more particularly, to a method and system for facilitating identification of a layout of a user interface.
| # | Name | Date |
|---|---|---|
| 1 | Power of Attorney [31-05-2017(online)].pdf | 2017-05-31 |
| 2 | Form 5 [31-05-2017(online)].pdf | 2017-05-31 |
| 3 | Form 3 [31-05-2017(online)].pdf | 2017-05-31 |
| 4 | Form 18 [31-05-2017(online)].pdf_45.pdf | 2017-05-31 |
| 5 | Form 18 [31-05-2017(online)].pdf | 2017-05-31 |
| 6 | Form 1 [31-05-2017(online)].pdf | 2017-05-31 |
| 7 | Drawing [31-05-2017(online)].pdf | 2017-05-31 |
| 8 | Description(Complete) [31-05-2017(online)].pdf_44.pdf | 2017-05-31 |
| 9 | Description(Complete) [31-05-2017(online)].pdf | 2017-05-31 |
| 10 | REQUEST FOR CERTIFIED COPY [01-06-2017(online)].pdf | 2017-06-01 |
| 11 | abstract 201741019160.jpg | 2017-06-01 |
| 12 | 201741019160-Proof of Right (MANDATORY) [01-09-2017(online)].pdf | 2017-09-01 |
| 13 | Correspondence By Agent_Form30,Form1_05-09-2017.pdf | 2017-09-05 |
| 14 | 201741019160-FER.pdf | 2020-06-30 |
| 15 | 201741019160-FER_SER_REPLY [03-12-2020(online)].pdf | 2020-12-03 |
| 16 | 201741019160-DRAWING [03-12-2020(online)].pdf | 2020-12-03 |
| 17 | 201741019160-CORRESPONDENCE [03-12-2020(online)].pdf | 2020-12-03 |
| 18 | 201741019160-COMPLETE SPECIFICATION [03-12-2020(online)].pdf | 2020-12-03 |
| 19 | 201741019160-CLAIMS [03-12-2020(online)].pdf | 2020-12-03 |
| 20 | 201741019160-US(14)-HearingNotice-(HearingDate-07-09-2023).pdf | 2023-08-18 |
| 21 | 201741019160-POA [28-08-2023(online)].pdf | 2023-08-28 |
| 22 | 201741019160-FORM 13 [28-08-2023(online)].pdf | 2023-08-28 |
| 23 | 201741019160-Correspondence to notify the Controller [28-08-2023(online)].pdf | 2023-08-28 |
| 24 | 201741019160-AMENDED DOCUMENTS [28-08-2023(online)].pdf | 2023-08-28 |
| 25 | 201741019160-Written submissions and relevant documents [21-09-2023(online)].pdf | 2023-09-21 |
| 26 | 201741019160-PETITION UNDER RULE 137 [21-09-2023(online)].pdf | 2023-09-21 |
| 27 | 201741019160-FORM 3 [21-09-2023(online)].pdf | 2023-09-21 |
| 28 | 201741019160-PatentCertificate20-10-2023.pdf | 2023-10-20 |
| 29 | 201741019160-IntimationOfGrant20-10-2023.pdf | 2023-10-20 |
| 1 | SEARCHE_30-06-2020.pdf | 2020-06-30 |
| 2 | search08AE_25-02-2021.pdf | 2021-02-25 |