Abstract: A method for testing test devices (104A-N) having different characteristics includes capturing first and second sets of images including application screens rendered by a reference device (102) and a test device (104A), respectively. The method identifies objects and associated contours present in a test screen (500) and a reference screen (300), and identifies differences therebetween. The identified differences are transmitted to a defect management system (130) that provides options to indicate if the differences are to be considered defects or valid changes. The defect management system (130) automatically triggers the reference device (102) to render an updated reference screen (300) for capture by an image-capturing unit when selected options indicate that at least one of the differences is to be considered a valid change. The captured image of the updated reference screen (300) is used as the reference screen for subsequent test executions to reduce rework.
Claims:
1. A method for testing a plurality of test devices (104A-N) having different characteristics, comprising:
capturing a first set of images and a second set of images comprising one or more application screens rendered by a reference device (102) and a test device (104A), respectively, using an image-capturing unit (106) when one or more automated test scripts are executed by a test automation system (100) in the reference device (102) and the test device (104A);
identifying one or more objects that are present in a test screen (500) based on previously learnt patterns representative of one or more objects present in a reference screen (300) using a pattern matching system (120);
determining corresponding contours of the identified objects in the test screen (500) using a contour based navigation system (122);
identifying one or more differences between the test screen (500) and the reference screen (300) based on one or more differences between the contours of the objects in the reference screen (300) and contours of the objects in the test screen (500) using the contour based navigation system (122);
transmitting the one or more identified differences to a defect management system (130) using one or more application programming interface calls;
providing, by the defect management system (130), one or more selectable options to indicate if the one or more identified differences are to be considered a defect or a valid change;
triggering the reference device (102) to render an updated reference screen (300) by the defect management system (130) when the selected options indicate that at least one of the identified differences is to be considered a valid change, further triggering the image-capturing unit (106) to capture an image of the updated reference screen (300); and
configuring the test automation system (100) to use the captured image as the updated reference screen (300) for subsequent executions of the one or more automated test scripts.
2. The method as claimed in claim 1, further comprising:
updating at least a portion of one or more navigation files stored in a database (116) of the test automation system (100) when the selected options indicate that at least one of the one or more identified differences is to be considered a valid change;
identifying a target object (306J) in the test screen (500) to navigate to next based on the navigation files;
comparing text associated with the target object (306J) with text within each of a first set of contours present in a perceptible region (512) of the test screen (500) to identify if the target object (306J) is present in the perceptible region (512);
scrolling the test screen (500) to cause a second set of objects to move from an imperceptible region (514) of the test screen (500) to the perceptible region (512) when the target object (306J) is identified to be absent from the perceptible region (512);
arranging a second set of contours, corresponding to the second set of objects, in a designated order to obtain a list of arranged contours upon identifying that the target object (306J) is present within one of the second set of contours;
identifying a position of the target object (306J) in the list of arranged contours; and
navigating to the target object (306J) automatically based on the identified position and selecting the target object (306J) to execute associated functionality testing.
3. The method as claimed in claim 1, wherein the different characteristics comprise one or more of device models, screen sizes, and operating platforms.
4. The method as claimed in claim 1, wherein the objects in the reference screen (300) and in the test screen (500) comprise one or more of menu items, buttons, text fields, list boxes, combo boxes, checkboxes, logos, tables, and grids.
5. The method as claimed in claim 2, wherein the one or more identified differences between the reference screen (300) rendered by the reference device (102) and the test screen (500) rendered by the test device (104A) comprise:
an offset object (308) that is disposed at a first location (312) in the reference screen (300) and at a second location (510) in the test screen (500), wherein the first location (312) is located at a relative offset from the second location (510);
presence of a newly added object (501) in the test screen (500) that is absent from the reference screen (300); and
absence of an object (306L) in the test screen (500) that is present in the reference screen (300).
6. A test automation system (100, 110) for testing a plurality of test devices (104A-N) having different characteristics, comprising:
a reference device (102) configured to execute an application (114), wherein the same application (114) is also executed in a test device (104A) selected from the plurality of test devices (104A-N);
an image-capturing unit (106) positioned to capture at least one reference screen (300) rendered by the reference device (102) and a corresponding test screen (500) rendered by the test device (104A) when the application (114) is executed in the reference device (102) and the test device (104A);
a defect management system (130);
a pattern matching system (120) operatively coupled to the image-capturing unit (106) and configured to identify one or more objects that are present in the test screen (500) based on previously-learnt patterns representative of objects present in the reference screen (300);
a contour based navigation system (122) operatively coupled to one or more of the image-capturing unit (106), the pattern matching system (120) and the defect management system (130), and configured to:
determine contours of the identified objects in the test screen (500);
identify one or more differences between the test screen (500) and the reference screen (300) based on one or more differences between pre-learnt contours of the objects in the reference screen (300) and the identified contours of the objects in the test screen (500);
transmit the identified differences to the defect management system (130) using one or more application programming interface calls;
wherein the defect management system (130) is configured to:
provide one or more selectable options to indicate if the one or more identified differences are to be considered a defect or a valid change;
trigger the reference device (102) to render an updated reference screen (300) when the selected options indicate that at least one of the identified differences is to be considered a valid change, further triggering the image-capturing unit (106) to capture an image of the updated reference screen (300); and
configure the test automation system (100, 110) to use the captured image as the reference screen for subsequent test executions.
7. The test automation system (100, 110) as claimed in claim 6 further configured to:
update at least a portion of one or more navigation files stored in a database (116) of the test automation system (100) when the selected options indicate that at least one of the one or more identified differences is to be considered a valid change;
identify a target object (306J) in the test screen (500) to navigate to next based on the navigation files;
compare text associated with the target object (306J) with text within each of a first set of contours present in a perceptible region (512) of the test screen (500) to identify if the target object (306J) is present in the perceptible region (512);
scroll the test screen (500) to cause a second set of objects to move from an imperceptible region (514) in the test screen (500) to the perceptible region (512) when the target object (306J) is identified to be absent from the perceptible region (512);
arrange a second set of contours, corresponding to the second set of objects, in a designated order to obtain a list of arranged contours upon identifying that the target object (306J) is present within one of the second set of contours;
identify a position of the target object (306J) in the list of arranged contours; and
navigate to the target object (306J) automatically based on the identified position and select the target object (306J) to execute associated functionality testing.
8. The test automation system (100, 110) as claimed in claim 6, wherein the different characteristics comprise one or more of device models, screen sizes, and operating platforms.
9. The test automation system (100, 110) as claimed in claim 7, wherein the one or more identified differences comprise:
an offset object (308) that is disposed at a first location (312) in the reference screen (300) and at a second location (510) in the test screen (500), wherein the first location (312) is located at a relative offset from the second location (510);
presence of a newly added object (501) in the test screen (500) that is absent from the reference screen (300); and
absence of an object (306L) in the test screen (500) that is present in the reference screen (300).
10. The test automation system (100, 110) as claimed in claim 9, wherein the test automation system (100) is configured to test the plurality of test devices (104A-N) comprising one or more of smart televisions, mobile phones, tablets, laptops, and desktop computers.
Description:
BACKGROUND
[0001] Embodiments of the present disclosure relate generally to a test automation system. More particularly, the present disclosure relates to a test automation system for testing different types of multimedia devices using a contour based navigation approach.
[0002] There are many multimedia devices available in the market such as a smart television (TV), a mobile phone, a tablet, a laptop, and a desktop computer. These multimedia devices differ in their characteristics, such as their models, screen sizes, and/or operating platforms, and therefore, entail different testing requirements. Accordingly, during testing, it may be imperative to account for the different screen sizes of the devices that may affect the manner in which user interface (UI) screens associated with an application are displayed on these devices. For example, the manner in which a UI screen associated with the YouTube® application is displayed on a smartphone display would often be different from the same UI screen displayed on a smart TV display, and therefore, would necessitate inclusion and maintenance of additional test case scenarios.
[0003] Presently, there are certain test automation solutions available in the market that typically test only a specific type of multimedia device. Further, such existing test automation solutions mostly use an image comparison method for testing a particular multimedia device’s capability to appropriately render UI screens associated with an application. For example, the existing test automation solutions may be designed to automatically test a particular model of a smart TV running on the Android platform and having a screen size of 30 inches to identify whether the smart TV properly renders UI screens of an application under test.
[0004] To that end, the existing test automation solutions first execute scripts associated with testing graphical user interfaces (GUIs) of the application under test in a reference-generating device, which is a sample of the smart TV to be tested. The existing test automation solutions capture a set of reference images having UI screens associated with the application under test and receive user inputs including locations of various UI elements in the UI screens. For example, certain existing test automation solutions receive and store locations of logos, menu items, buttons, text fields, list and combo boxes, checkboxes, tables, grids, static text, etc. in the UI screens.
[0005] When testing the smart TV, the existing test automation solutions execute scripts associated with testing GUIs of the application under test and capture associated UI screens. The existing test automation solutions then perform pixel-wise image comparison between reference UI screens and test UI screens captured with the actual device at corresponding pre-stored locations to identify if there is a match between UI elements at the pre-stored locations of the reference and test UI screens.
[0006] However, test automation solutions that perform pixel-wise image comparison have certain limitations. For example, as noted previously, such test automation solutions can test only a particular type of multimedia device. More specifically, the existing test automation solutions cannot test a smart TV that has a screen size of 50 inches using a set of reference images that are captured with a smart TV having a screen size of 30 inches, as locations of UI elements vary due to a difference in screen size. For testing the smart TV having the screen size of 50 inches, the existing test automation solutions need another set of reference images and location information of corresponding UI elements.
[0007] Another shortcoming of the existing test automation solutions that perform pixel-wise image comparison is that testing of different multimedia devices needs different technical test scripts to suit corresponding UI rendering capabilities that would allow the application to be navigated and tested in an automated way. Therefore, any change even in a single UI screen necessitates changes to different sets of the technical test scripts developed to automatically test UI rendering capabilities of different multimedia devices, leading to a considerable increase in rework effort, test cycle time, and cost.
[0008] Hence, there is a need for an improved test automation system and an associated method for testing different types of multimedia devices that differ in their characteristics without needing significant rework on test scripts.
BRIEF DESCRIPTION
[0009] It is an objective of the present disclosure to provide a method for testing a plurality of test devices having different characteristics. The method includes capturing a first set of images and a second set of images including one or more application screens rendered by a reference device and a test device, respectively, using an image-capturing unit when one or more automated test scripts are executed by a test automation system in the reference device and the test device. Objects that are present in a test screen are identified based on previously learnt patterns representative of one or more objects present in a reference screen using a pattern matching system. Contours corresponding to the identified objects in the test screen are determined using a contour based navigation system. One or more differences between the test screen and the reference screen are identified based on one or more differences between pre-learnt contours of the objects in the reference screen and the identified contours of the objects in the test screen using the contour based navigation system. The one or more identified differences are transmitted to a defect management system using one or more application programming interface calls. One or more selectable options are provided by the defect management system to indicate if the one or more identified differences are to be considered a defect or a valid change. The method further includes triggering the reference device by the defect management system to render an updated reference screen when the selected options indicate that at least one of the identified differences is to be considered a valid change. The method further includes triggering the image-capturing unit to capture an image of the updated reference screen. The test automation system is then configured to use the captured image as the updated reference screen for subsequent executions of the one or more automated test scripts.
[0010] According to an aspect of the present disclosure, at least a portion of one or more navigation files stored in a database of the test automation system are updated when the selected options indicate that at least one of the one or more identified differences is to be considered a valid change. A target object in the test screen is identified to navigate to next based on the navigation files. Text associated with the target object is compared with text within each of a first set of contours present in a perceptible region of the test screen to identify if the target object is present in the perceptible region.
[0011] The test screen is scrolled to cause a second set of objects to move from an imperceptible region in the test screen to the perceptible region when the target object is identified to be absent from the perceptible region. A second set of contours, corresponding to the second set of objects, is arranged in a designated order to obtain a list of arranged contours upon identifying that the target object is present within one of the second set of contours. A position of the target object in the list of arranged contours is identified and is used to automatically navigate to the target object to execute associated functionality testing.
[0012] According to certain aspects of the present disclosure, the different characteristics include one or more of device models, screen sizes, and operating platforms.
[0013] According to certain aspects of the present disclosure, the objects in the reference screen and in the test screen include one or more of menu items, buttons, text fields, list boxes, combo boxes, checkboxes, logos, tables, and grids. The identified differences between the reference screen rendered by the reference device and the test screen rendered by the test device include one or more of an offset object that is disposed at a first location in the reference screen and at a second location in the test screen rendered by the test device, where the first location is located at a relative offset from the second location. The identified differences further include presence of a newly added object in the test screen that is absent from the reference screen, and absence of an object in the test screen that is present in the reference screen.
[0014] It is an objective of the present disclosure to provide a test automation system for testing a plurality of test devices having different characteristics. The test automation system includes a reference device configured to execute an application, wherein the same application is executed in a test device selected from the plurality of test devices. The test automation system further includes an image-capturing unit positioned to capture at least one reference screen rendered by the reference device and a corresponding test screen rendered by the test device when the application is executed in the reference device and the test device. The system further includes a defect management system, and a pattern matching system operatively coupled to the image-capturing unit and configured to identify one or more objects that are present in the test screen based on previously learnt patterns representative of objects present in the reference screen. The system also includes a contour based navigation system operatively coupled to one or more of the image-capturing unit, the pattern matching system and the defect management system. The contour based navigation system is configured to determine contours of the identified objects in the test screen, and identify one or more differences between the test screen and the reference screen based on one or more differences between pre-learnt contours of the objects in the reference screen and the identified contours of the objects in the test screen. The contour based navigation system is also configured to transmit the identified differences to the defect management system using one or more application programming interface calls. Further, the defect management system is configured to provide one or more selectable options to indicate if the one or more identified differences are to be considered a defect or a valid change. The defect management system is also configured to trigger the reference device to render an updated reference screen when the selected options indicate that at least one of the identified differences is to be considered a valid change, further triggering the image-capturing unit to capture an image of the updated reference screen. Additionally, the defect management system is configured to configure the test automation system to use the captured image as the reference screen for subsequent test executions.
[0015] According to aspects of the present disclosure, the test automation system is configured to update at least a portion of one or more navigation files stored in a database of the test automation system when the selected options indicate that at least one of the one or more identified differences is to be considered a valid change. The test automation system is further configured to identify a target object in the test screen to navigate to next based on the navigation files, and compare text associated with the target object with text within each of a first set of contours present in a perceptible region of the test screen to identify if the target object is present in the perceptible region. The test automation system is also configured to scroll the test screen to cause a second set of objects to move from an imperceptible region in the test screen to the perceptible region when the target object is identified to be absent from the perceptible region. The test automation system is also configured to arrange a second set of contours, corresponding to the second set of objects, in a designated order to obtain a list of arranged contours upon identifying that the target object is present within one of the second set of contours. Moreover, the test automation system is also configured to identify a position of the target object in the list of arranged contours, and navigate to the target object automatically based on the identified position and select the target object to execute associated functionality testing.
[0016] According to one aspect of the present disclosure, the different characteristics comprise device models, screen sizes, and operating platforms.
[0017] Further, the identified differences comprise an offset object that is disposed at a first location in the reference screen and at a second location in the test screen, wherein the first location is located at a relative offset from the second location. The identified differences further include presence of a newly added object in the test screen that is absent from the reference screen, and absence of an object in the test screen that is present in the reference screen.
[0018] According to certain aspects of the present disclosure, the test automation system is configured to test the plurality of multimedia devices comprising one or more of smart televisions, mobile phones, tablets, laptops, and desktop computers.
DRAWINGS
[0019] These and other features, aspects, and advantages of the claimed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0020] FIG. 1 is a block diagram illustrating an exemplary test automation system, in accordance with aspects of the present disclosure;
[0021] FIG. 2 is a flow diagram illustrating an exemplary method for generating a set of reference images using the test automation system of FIG. 1, in accordance with aspects of the present disclosure;
[0022] FIG. 3 is an exemplary view of a user interface screen rendered by a reference device associated with the test automation system of FIG. 1, in accordance with aspects of the present disclosure;
[0023] FIGS. 4A-B are flow diagrams illustrating an exemplary method for testing one or more test devices using the test automation system of FIG. 1, in accordance with aspects of the present disclosure; and
[0024] FIG. 5 is an exemplary view illustrating a user interface screen rendered by a test device associated with the test automation system of FIG. 1, in accordance with aspects of the present disclosure.
DETAILED DESCRIPTION
[0025] The following description presents an exemplary system and method for automatically testing user interface (UI)-rendering capabilities of different types of multimedia devices that differ in various characteristics. Particularly, the embodiments presented herein describe a test automation system that tests UI rendering capabilities of different types of multimedia devices using the same set of reference images generated using one particular type of multimedia device and without needing to develop different device-specific test scripts.
[0026] As noted previously, existing test automation solutions typically test only a specific type of multimedia device and often use a pixel-wise image comparison method, which makes such solutions onerous and time-consuming when testing UI rendering capabilities of many different types of multimedia devices. For example, existing test automation solutions would fail to test UI rendering capabilities of a particular model of a smart TV running on the Linux platform using a set of reference images and automated test scripts that are typically used for testing another model of a smart TV running on the Android platform. For testing the particular model of the Linux smart TV, test engineers need to develop another set of automated test scripts that are different from test scripts used for testing the smart TV running on the Android platform.
[0027] Further, existing test automation solutions need to use another set of reference images for testing the particular model of the Linux smart TV. Thus, existing test automation solutions require different technical test scripts and different set of reference images for testing different multimedia devices, leading to a considerable increase in test scripts development effort, test scripts rework effort, test cycle time, and cost.
[0028] Further, UI screens associated with an application may frequently change during a development phase of the application. For example, positions of objects in the UI screens frequently change from one version of the application to another, a few objects may be newly added to the UI screens, and a few existing objects may be removed from the UI screens. Any such change, even in a single UI screen, necessitates capturing a new set of reference images and changes to different sets of technical test scripts developed to automatically test UI rendering capabilities of different multimedia devices.
[0029] Unlike such existing systems, embodiments of the present test automation system and method enable testing different types of devices that differ in their characteristics, such as, but not limited to, models, screen sizes, and operating platforms, using the same set of reference images and without the need to develop device-specific test scripts. It may be noted that embodiments of the present test automation system may be used, for example, for testing screen-rendering capabilities of different types of devices including smart televisions, set-top-boxes, mobile phones, tablets, laptops, and desktop computers.
[0030] For example, the test automation system can be used for testing different types of multimedia devices to identify if the multimedia devices appropriately render display screens associated with a television (TV) channel. The test automation system can be used to identify the presence or absence of objects expected to be present in the display screens rendered by the multimedia devices. Examples of such objects include a logo associated with the TV channel, a name of the program currently being broadcast, and text boxes containing news flashes. The test automation system can also be used to identify if the objects are at expected locations. For example, the test automation system identifies whether the logo of the TV channel, which is expected to appear at the right corner of the display screen, is actually rendered at that corner by the multimedia devices.
[0031] In another example, the test automation system can be used for testing different types of multimedia devices to identify if the multimedia devices appropriately render UI screens associated with a particular application. For example, the test automation system can be used to identify presence and/or absence of various UI objects such as a “submit” button and locations of such objects in the UI screens rendered by the multimedia devices. However, for clarity, the present disclosure describes an embodiment of the test automation system in the context of testing UI rendering capabilities of different models of smart televisions, as depicted and described in detail with reference to FIG. 1.
[0032] FIG. 1 is a block diagram illustrating an exemplary test automation system (100), in accordance with aspects of the present disclosure. In certain embodiments, the test automation system (100) employs a reference device (102) to generate a single set of reference images for testing one or more test devices (104A-N). To that end, the test automation system (100) includes an image-capturing unit (106), a continuous integration server (108), and a processing device (110), connected to the reference device (102) and the test devices (104A-N) via a communications network (112).
[0033] In one embodiment, the reference device (102) is a multimedia device that is capable of rendering display screens associated with an application. Examples of the reference device (102) include, but are not limited to, a particular model of a smart TV, a mobile phone, a set-top-box, a tablet, a desktop computer, and a laptop. Further, the reference device (102) includes an application (114) whose UI screens are to be tested across the one or more test devices (104A-N). For instance, an example of the reference device (102) having the application (114) includes a particular model of a smart TV having a screen size of 55 inches and running on the Android platform.
[0034] In one embodiment, the test automation system (100) is configured to generate a set of reference images using the reference device (102). To that end, the image-capturing unit (106) is positioned and is oriented such that the image-capturing unit (106) faces a display screen of the reference device (102). An example of the image-capturing unit (106) includes a camera. Further, the reference device (102) has the application (114) preinstalled and is communicatively coupled to the processing device (110) for generating and sharing the set of reference images with the test automation system (100).
[0035] The processing device (110) stores one or more automated test scripts in an associated database (116) and executes the automated test scripts using a script execution system (118). Execution of the automated test scripts simulates one or more user actions associated with user interface (UI) screens of the application (114) in accordance with test parameters outlined in each of a plurality of test cases or test scenarios. Exemplary user actions include selections of various UI objects in the UI screens, checking and unchecking boxes in the UI screens, and navigating from one UI screen to another UI screen of the application (114). In certain embodiments, certain automated test scripts test functionalities associated with a plurality of objects present in a UI screen of the application (114). However, these automated test scripts may not define how to navigate from one object to another object present in the UI screen.
[0036] To that end, the processing device (110) stores navigation files that define a navigation path for navigating from one UI object to another UI object. For example, a particular UI screen includes a set of buttons including a ‘home’ button, a ‘knowledge’ button, and an ‘entertainment’ button. In this example, the script execution system (118) selects a test case and executes a portion of the automated test scripts associated with the selected test case. For example, the selected test case may relate to verifying whether a list of entertainment programs is displayed upon selecting the ‘entertainment’ button. The script execution system (118) identifies that the next UI object to be selected is the ‘entertainment’ button based on the automated test scripts written in a programmatic language.
[0037] The script execution system (118) then refers to the navigation files and identifies a navigation path to the ‘entertainment’ button. For example, the identified navigation path provides a first navigation path from the ‘home’ button to the ‘knowledge’ button and a second navigation path from the ‘knowledge’ button to the ‘entertainment’ button. The script execution system (118) then executes a portion of the automated test scripts to navigate along the first navigation path and the second navigation path and subsequently selects the ‘entertainment’ button, which causes rendering of a new UI screen with the list of entertainment programs.
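By way of a non-limiting illustration, one possible representation of such a navigation file and path lookup is sketched below in Python. The JSON structure, object labels, and key names are hypothetical and are chosen only to mirror the ‘home’ to ‘entertainment’ example above.

    import json
    from collections import deque

    # Hypothetical navigation file: each entry maps an object's label to the
    # key presses that lead to neighbouring objects.
    NAVIGATION_FILE = json.loads("""
    {
      "home": {"right": "knowledge"},
      "knowledge": {"right": "entertainment", "left": "home"},
      "entertainment": {"left": "knowledge"}
    }
    """)

    def find_path(nav, start, target):
        # Breadth-first search over the navigation graph to obtain the
        # sequence of key presses from the current object to the target.
        queue = deque([(start, [])])
        visited = {start}
        while queue:
            node, moves = queue.popleft()
            if node == target:
                return moves
            for key, neighbour in nav.get(node, {}).items():
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append((neighbour, moves + [key]))
        return None

    print(find_path(NAVIGATION_FILE, "home", "entertainment"))  # ['right', 'right']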
[0038] Similarly, it is to be understood that the script execution system (118) is configured to execute other portions of the automated test scripts, which causes the reference device (102) to render a plurality of UI screens associated with the application (114). In certain embodiments, the image-capturing unit (106) is configured to capture images of the UI screens rendered by the reference device (102) following execution of the automated test scripts. The images of the UI screens, thus captured, are used as a set of reference images for testing the one or more test devices (104A-N).
[0039] In one embodiment, the one or more test devices (104A-N) are multimedia devices that are capable of rendering UI screens associated with an application and differ in at least one characteristic when compared to the reference device (102). Examples of such characteristics include device models, screen sizes, and associated operating platforms. In one embodiment, the reference device (102) may be a smart TV having a screen size of 55 inches and running on the Android platform, whereas a test device (104A) may be a smart TV having a screen size of 32 inches and running on the Linux platform. In another embodiment, the test device (104N) may be a mobile phone having a screen size of 7 inches and running on the Windows platform.
[0040] In certain embodiments, testing of the test device (104A) involves having the application (114) preinstalled therein, and ensuring that the test device (104A) is communicatively coupled to the processing device (110). As noted previously, the processing device (110) executes the automated test scripts to navigate through the different UI screens associated with the application (114) being executed in the test device (104A). The image-capturing unit (106) is deployed facing a display screen of the test device (104A) to capture images of the UI screens rendered by the test device (104A) when the application (114) navigates through the associated UI screens. Subsequently, the processing device (110) processes the captured screens to verify a capability of the test device (104A) to properly render the UI screens of the application (114), as described in detail with reference to FIGS. 4A-B.
[0041] In certain embodiments, the processing device (110) is a processor-enabled device configured to receive the images of UI screens captured by the image-capturing unit (106) via the communications network (112). Examples of the communications network (112) include a Wi-Fi network, an Ethernet, a cellular data network, and a short-range communications network such as a Bluetooth network. In one embodiment, the processing device (110) is an application server that resides locally at a center where the reference device (102) and the one or more test devices (104A-N) reside. In another embodiment, the processing device (110) is an application server that resides remotely from the reference device (102) and the one or more test devices (104A-N).
[0042] According to certain aspects of the present disclosure, the processing device (110) further includes the database (116), the script execution system (118), a pattern matching system (120), a contour based navigation system (122), a report generation system (124), and a navigation path updating system (126). In certain embodiments, the various systems (118, 120, 122, 124, and 126) associated with the processing device (110) may be implemented by suitable code on a processor-based system, such as a general-purpose or a special-purpose computer. Accordingly, the processing device (110) and associated systems (118, 120, 122, 124, and 126), for example, include one or more general-purpose processors, specialized processors, graphical processing units, microprocessors, programmable logic arrays, field-programmable gate arrays, cloud-based processing systems, and/or other suitable computing devices.
[0043] According to certain aspects of the present disclosure, the processing device (110) sets up test sessions for testing the test devices (104A-N). For example, the processing device (110) directs the script execution system (118) to set up a test session for testing a test device (104A) by executing the automated test scripts in accordance with test parameters outlined in specific test cases.
[0044] Execution of the automated test scripts causes automatic navigation through UI screens of the application (114). Meanwhile, the pattern matching system (120) identifies various objects in one or more of the UI screens rendered by the test device (104A). In one embodiment, the pattern matching system (120) is implemented as a supervised machine learning system that learns to identify different types of objects present in reference UI screens and associated locations based on unique patterns associated with the objects. The pattern matching system (120) then identifies objects in the UI screens rendered by the test device (104A) based on previously learnt patterns associated with the reference UI screens. For example, the pattern matching system (120) identifies objects such as icons, logos, menu items, buttons, text fields, list and combo boxes, checkboxes, tables, grids, static text, etc. present in the UI screens rendered by the test device (104A), as described in detail with reference to FIG. 2. The pattern matching system (120) also identifies if all objects in each of the UI screens rendered by the reference device (102), hereinafter referred to as the reference UI screens, are also present in corresponding UI screens rendered by the test device (104A), hereinafter referred to as the test UI screens.
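By way of a non-limiting illustration, the following Python sketch shows how an object learnt from a reference UI screen might be located in a test UI screen using template matching from the OpenCV library. The file names and the 0.8 similarity threshold are assumptions for illustration; the pattern matching system (120) may equally be implemented with a supervised machine learning model as described above.

    import cv2

    # Placeholder file names for a captured test screen and a learnt pattern.
    test_screen = cv2.imread("test_screen.png", cv2.IMREAD_GRAYSCALE)
    learnt_pattern = cv2.imread("search_icon_pattern.png", cv2.IMREAD_GRAYSCALE)

    # Slide the learnt pattern over the test screen and score each position.
    result = cv2.matchTemplate(test_screen, learnt_pattern, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    if max_val >= 0.8:  # assumed similarity threshold
        print("Object found at", max_loc)
    else:
        print("Object absent from test screen")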
[0045] In one embodiment, the contour based navigation system (122) identifies contours associated with each of the identified objects in the test UI screens. The contour based navigation system (122) also enables faster navigation to different target objects in a designated screen. Specifically, the contour based navigation system (122) identifies a target object that is hidden in a designated screen. For example, a display screen size associated with the test device (104A) may be so small that only a first half of a test screen is visible, whereas a second half of the test screen is hidden or not visible. In this example, the contour based navigation system (122) configures the image-capturing unit (106) to capture an image of the first half of the screen. The contour based navigation system (122) identifies contours associated with objects in the first half of the test screen and performs contour analysis and/or optical character recognition (OCR) to identify if a target object to navigate to next is present in the first half of the test screen. If the target object is present in the first half, the contour based navigation system (122) navigates to it without needing to scroll down to the second half, thus reducing the effort and time needed for processing unrelated contours in the second half.
[0046] However, if the contour analysis shows that the target object is absent from the first half, the contour based navigation system (122) scrolls the test screen to configure the test device (104A) to render the second half of the test screen. The contour based navigation system (122) configures the image-capturing unit (106) to capture an image of the second half of the screen and performs OCR or contour analysis on the captured screen to identify if the target object to navigate to next is present in the second half of the test screen. The contour based navigation system (122) arranges contours and associated objects in the second half of the test screen in a designated order to obtain a list of arranged contours upon identifying that the target object is present within one of the objects in the second half of the test screen. The contour based navigation system (122) identifies a position of the target object in the list of arranged contours and automatically navigates to the target object based on the identified position. As previously noted, the contour based navigation system (122) may not scroll the test screen if the target object is identified to be present in the first half of the test screen itself. In contrast, existing approaches continuously scroll the test screen even if the target object is present in the first half of the test screen, which delays identification of the target object as continuous scrolling needs to wait for screen refresh times.
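A minimal sketch of this scroll-on-demand behavior is shown below, assuming the pytesseract library for OCR and device-specific capture_screen() and scroll_screen() helpers supplied by the test harness; these helper names, the Canny thresholds, and the scroll limit are hypothetical.

    import cv2
    import pytesseract

    def contour_texts(image):
        # Detect contours in a grayscale screenshot and read the text
        # within each contour's bounding box via OCR.
        edges = cv2.Canny(image, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        entries = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            text = pytesseract.image_to_string(image[y:y + h, x:x + w]).strip()
            entries.append(((x, y), text))
        return entries

    def locate_target(target_text, capture_screen, scroll_screen, max_scrolls=5):
        for _ in range(max_scrolls + 1):
            # Arrange contours top-to-bottom, left-to-right (a designated order).
            entries = sorted(contour_texts(capture_screen()),
                             key=lambda e: (e[0][1], e[0][0]))
            for position, text in entries:
                if target_text.lower() in text.lower():
                    return position  # target found; no further scrolling needed
            scroll_screen()  # target absent from perceptible region: reveal more
        return None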
[0047] In certain embodiments, the contour based navigation system (122) identifies differences between the reference and test UI screens. For example, the contour based navigation system (122) identifies objects that are not at the same locations when corresponding locations in the reference and test UI screens are compared. For instance, a search icon may be present at a top right portion of a reference UI screen. However, in the test UI screen, the search icon may be present at a bottom right portion. In this example, the contour based navigation system (122) identifies that the search icon is present at a different location in the test UI screen when compared to a corresponding location in the reference UI screen.
[0048] Further, the contour based navigation system (122) identifies if one or more objects present in the reference UI screen are absent from the test UI screen, or vice versa. For example, the search icon may be present in the reference UI screen. However, in a particular scenario, the search icon may not be present in the test UI screen due to certain software bugs associated with the application (114) or incompatibility between the test device (104A) and a particular version of the application (114). In this example, the contour based navigation system (122) identifies the search icon to be absent from the test UI screen.
[0049] Similarly, in another example, the search icon may not be present in the reference UI screen. However, the search icon may be present in the test UI screen as the search icon may be a feature newly added to the UI screen after generation of the reference UI screen. In this example, a difference identified by the contour based navigation system (122) includes the presence of the search icon in the test UI screen that is absent from the reference UI screen. In one embodiment, one or more such identified differences between the reference UI screen and the test UI screen are used for generating a test execution report.
[0050] More specifically, the report generation system (124) generates the test execution report based on a UI analysis performed by the pattern matching system (120) and/or the contour based navigation system (122). The test execution report includes one or more identified differences between reference UI screens and test UI screens. For example, the test execution report includes one or more objects that are disposed at different locations when corresponding locations in reference UI screens and corresponding locations in test UI screens are compared. The test execution report also includes objects that are present in test UI screens but are absent from reference UI screens, and objects that are absent from test UI screens but are present in reference UI screens.
[0051] The report generation system (124) then automatically reports the identified differences to a defect management system (130), for example, using one or more application programming interface (API) calls. An example of the defect management system (130) is the JIRA tool. The defect management system (130) then provides user selectable options to indicate whether the identified differences are to be considered as defects or as valid changes. For example, one of the identified differences between a reference UI screen and a test UI screen may include a search icon that is present in the test UI screen but is absent from the reference UI screen. In this example, the defect management system (130) provides user selectable options to indicate whether the identified difference is to be considered as a defect or as a planned/valid change. The defect management system (130) may prompt a user to take necessary actions to fix a possible issue in the application (114) if the identified difference is indicated as a defect.
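By way of a non-limiting illustration, the sketch below shows how the report generation system (124) might report one identified difference to a JIRA-like defect management system over its REST API; the endpoint URL, project key, and credentials are placeholders.

    import requests

    def report_difference(summary, description):
        # Create one issue per identified difference in the defect tracker.
        payload = {
            "fields": {
                "project": {"key": "UITEST"},   # assumed project key
                "summary": summary,
                "description": description,
                "issuetype": {"name": "Bug"},
            }
        }
        response = requests.post(
            "https://defects.example.com/rest/api/2/issue",  # placeholder URL
            json=payload,
            auth=("automation-user", "api-token"),  # placeholder credentials
        )
        response.raise_for_status()
        return response.json()["key"]  # e.g., "UITEST-42"

    # Example usage (placeholder endpoint; will not resolve outside a test setup):
    # report_difference("Search icon absent from test UI screen",
    #                   "Object present in reference screen (300) but absent "
    #                   "from test screen (500).")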
[0052] Alternatively, the defect management system (130) is configured to trigger the reference device (102) to render an updated version of the reference UI screen when the identified difference is indicated as a planned/valid change. Meanwhile, the image-capturing unit (106) captures the rendered image of the updated version of the reference UI screen. Subsequently, the defect management system (130) configures the processing device (110) to use the captured image as the reference screen for subsequent test executions.
[0053] Specifically, the defect management system (130) configures the continuous integration server (108) to transmit an updated application (128) to the reference device (102) for triggering the reference device (102) to render an updated UI screen. The reference device (102) receives and executes the updated application (128). Subsequently, the test automation system (100) executes a portion of the automated test scripts to enable the reference device (102) to render the updated UI screen having the search icon, which is captured using the image-capturing unit (106). Further, the test automation system (100) uses the captured image including the updated UI screen as a reference UI screen for subsequent test executions.
[0054] In certain embodiments, execution of the automated test scripts developed previously may not simulate a selection of a newly added object in a test UI screen as the automated test scripts define only a set of existing UI objects to be selected one after another. Hence, in one embodiment, the test automation system (100) is configured to simulate the selection of the newly added object without needing to update the automated test scripts by generating a navigation path to the newly added object.
[0055] For example, when a search icon is newly added to a UI screen of the application (114), the image-capturing unit (106) is configured to capture an image of the updated UI screen having the search icon for use as a reference UI screen for subsequent test executions. In certain embodiments, the search icon that is newly added to the reference UI screen is identified for generating an associated navigation path. Specifically, the pattern matching system (120) identifies the search icon newly added in the reference UI screen based on a pattern associated with the search icon. In addition, the contour based navigation system (122) identifies a contour associated with the search icon. The navigation path updating system (126) then generates a navigation path to the search icon based on the contour of the search icon and one or more contours of UI objects disposed in the vicinity of the search icon. For example, in one implementation, the search icon may be disposed below a search box in the reference UI screen. In this example, the navigation path updating system (126) generates a unique navigation path from a contour associated with the search box to a contour associated with the search icon in the reference UI screen.
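A minimal sketch of deriving such a navigation step from two contour bounding boxes is given below; the (x, y, w, h) rectangles, coordinate values, and key names are illustrative assumptions.

    def navigation_step(from_box, to_box):
        # from_box/to_box are (x, y, w, h) bounding rectangles of two contours.
        fx, fy, fw, fh = from_box
        tx, ty, tw, th = to_box
        dx = (tx + tw / 2) - (fx + fw / 2)
        dy = (ty + th / 2) - (fy + fh / 2)
        # Record the dominant axis as a single key press in the navigation file.
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"

    # The search icon sits below the search box, so the recorded path is 'down'.
    print(navigation_step((100, 40, 200, 30), (180, 90, 40, 40)))  # 'down'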
[0056] The navigation path updating system (126) then updates the navigation files stored in the database (116) based on the generated navigation path for automatically selecting and testing the newly added object in subsequent test executions. In certain embodiments, the test automation system (100) performs subsequent test executions using an updated set of reference images generated using the reference device (102). An exemplary method for generating a set of reference images for use by the test automation system (100) is depicted and described in detail with reference to FIG. 2.
[0057] FIG. 2 is a flow diagram illustrating an exemplary method (200) for generating a set of reference images using the reference device (102) by the test automation system (100) of FIG. 1, in accordance with aspects of the present disclosure. At step (202), the application (114) is executed in the reference device (102) for generating a set of reference UI screens that can be used for testing UI screen rendering capabilities of one or more test devices (104A-N).
[0058] At step (204), the processing device (110) executes one or more automated test scripts stored in the associated database (116). In one embodiment, the automated test scripts provide commands to simulate user actions on UI screens of the application (114) in accordance with test parameters outlined in each of a plurality of test cases or test scenarios. For example, a test case may relate to verifying whether a hidden text box appears in a UI screen upon selecting an expand button in the UI screen. In this example, the automated test scripts provide commands to simulate a user action, which is selection of the expand button in the UI screen. In another example, execution of the automated test scripts may provide commands to simulate other user actions such as selections of various UI objects in the UI screen, checking and unchecking boxes in the UI screen, and navigating from one UI screen to another UI screen of the application (114).
[0059] At step (206), the image-capturing unit (106) records one or more videos including UI screens rendered by the reference device (102) upon executing the automated test scripts. In one embodiment, the processing device (110) processes the one or more recorded videos using one or more image processing algorithms to generate images of the UI screens rendered by the reference device (102). The generated images are used as reference images for testing the one or more test devices (104A-N).
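For illustration only, one possible way to derive still reference images from such a recorded video is sketched below using the OpenCV library; the file name and the one-frame-per-second sampling rate are assumptions.

    import cv2

    video = cv2.VideoCapture("reference_session.mp4")  # placeholder file name
    frame_index, saved = 0, 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        if frame_index % 30 == 0:  # keep roughly one frame per second at 30 fps
            cv2.imwrite(f"reference_screen_{saved:03d}.png", frame)
            saved += 1
        frame_index += 1
    video.release()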
[0060] At step (208), the processing device (110) is configured to learn types and locations of objects in images corresponding to each of the rendered UI screens. More specifically, in certain embodiments, the pattern matching system (120) of the processing device (110) may be implemented as a supervised machine learning system that learns types and locations of objects in the UI screens based on unique patterns and labels associated with the objects. For instance, FIG. 3 is an exemplary view of a UI screen (300) rendered by the reference device (102) upon executing at least a portion of the automated test scripts. The UI screen (300) includes different types of objects, for example, a search icon (302), a rectangular box (304), a set of menu items (306A-L) within the rectangular box (304), a table (308), and certain text elements (310) within the set of menu items (306A-L) and within the table (308).
[0061] In certain embodiments, the pattern matching system (120) receives labels associated with the objects (302, 304, 306A-L, 308, and 310) from a user and adds the labels to the corresponding objects (302, 304, 306A-L, 308, and 310) in the UI screen (300). The pattern matching system (120) then learns unique patterns associated with the objects (302, 304, 306A-L, 308, and 310), and correlates the objects’ unique patterns to the objects’ labels. For instance, the pattern matching system (120) learns a unique pattern, for example, a contour associated with the search icon (302) and correlates the learnt contour with a label associated with the search icon (302), such that the pattern matching system (120) identifies any object whose shape resembles the learnt contour as the search icon (302).
[0062] Additionally, the pattern matching system (120) learns a corresponding position of the search icon (302) in the UI screen (300) based on the learnt contour of the search icon (302). Similarly, it is to be understood that the pattern matching system (120) learns types and locations of all other objects (304, 306A-L, 308, and 310) in the UI screen (300) based on their unique patterns and user provided information including objects’ labels and labels’ location information.
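By way of a non-limiting illustration, the sketch below shows one way the learnt contour, label, and location of an object such as the search icon (302) might be recorded and later matched by shape, assuming OpenCV's Hu-moment based matchShapes comparison; the region-of-interest input, Canny thresholds, and the 0.1 threshold are assumptions.

    import cv2

    def learn_object(reference_image, roi, label):
        # roi = (x, y, w, h) region supplied together with the user's label.
        x, y, w, h = roi
        patch = reference_image[y:y + h, x:x + w]
        edges = cv2.Canny(patch, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            raise ValueError("no contour found in the labelled region")
        # Keep the dominant contour as the object's learnt shape signature.
        largest = max(contours, key=cv2.contourArea)
        return {"label": label, "contour": largest, "location": (x, y)}

    def matches(learnt, candidate_contour, threshold=0.1):
        # Lower matchShapes scores indicate more similar contour shapes.
        score = cv2.matchShapes(learnt["contour"], candidate_contour,
                                cv2.CONTOURS_MATCH_I1, 0.0)
        return score < threshold  # assumed similarity threshold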
[0063] Referring back to FIG. 2, at step (210), information learnt, including types and locations of objects in each of the UI screens rendered by the reference device (102), is stored in the database (116). In certain embodiments, the test automation system (100) tests capabilities of the one or more test devices (104A-N) to properly render UI screens of the application (114) using reference UI screens rendered by the reference device (102), and information learnt and stored in the database (116), as described in detail with reference to FIGS. 4A-B.
[0064] FIGS. 4A-B are flow diagrams illustrating an exemplary method (400) for testing the one or more test devices (104A-N) by the test automation system (100) of FIG. 1, in accordance with aspects of the present disclosure. As noted previously, the present test automation system (100) is capable of testing different types of test devices (104A-N) using the same set of reference images generated using the reference device (102).
[0065] In one embodiment, the test devices (104A-B) differ from the reference device (102) in one or more characteristics. Examples of such characteristics include devices’ models, screen sizes, and associated operating platforms. An example of the test device (104A) includes a smart TV running on the Android platform and having a screen size of 30 inches. An example of the test device (104B) includes a mobile phone running on the Windows platform and having a screen size of 6 inches. Similar to the steps (202, 204, 206, and 208) described previously with reference to FIG. 2, the application (114) to be tested is executed in the test devices (104A-B), as depicted in step (402). At step (404), the processing device (110) executes one or more automated test scripts stored in the associated database (116) for testing a desired functionality provided by the application (114). In one implementation, execution of the automated test scripts causes navigation through different UI screens associated with the application (114) in the test devices (104A-B).
[0066] At step (406), the image-capturing unit (106) records one or more videos including UI screens rendered by the test devices (104A-B) when the automated test scripts are executed. At step (408), the one or more recorded videos including the rendered UI screens are transmitted to the processing device (110) via the communications network (112).
[0067] At step (410), the pattern matching system (120) in the processing device (110) identifies different types of objects present in the UI screens rendered by the test devices (104A-B). The pattern matching system (120) identifies the different types of objects present in the test UI screens based on objects’ unique patterns and labels that were learnt previously and stored in the database (116).
[0068] For example, FIG. 5 is an exemplary view illustrating a UI screen (500) rendered by the test device (104A) upon executing at least a portion of the automated test scripts. In one embodiment, the UI screen (300) depicted in FIG. 3 and the UI screen (500) depicted in FIG. 5 are associated with the application (114) and are rendered by the reference device (102) and the test device (104A), respectively, upon executing the same portion of the automated test scripts. When testing multiple test devices (104A) and (104B) having different characteristics such as display size, execution of the automated test scripts causes the test device (104B) to render another test UI screen (not shown) that is substantially similar to the UI screen (500). According to aspects of the present disclosure, the test automation system (100) is configured to use the same reference UI screen (300) for evaluating different objects in both the test UI screen (500) and the test UI screen rendered by the test device (104B). It may be noted that, for simplicity, the following sections only describe the evaluation of the test UI screen (500). However, the test automation system (100) is configured to similarly evaluate each of the objects present in the test UI screen rendered by the test device (104B) and any other test device using the same reference UI screen (300).
[0069] In order to evaluate the objects, the pattern matching system (120) first identifies the different types of objects (302, 304, 306A-K, 308, 310, and 501) present in the UI screen (500) based on objects’ unique patterns and objects’ labels learnt previously using the UI screen (300). For example, the pattern matching system (120) identifies the presence of the search icon (302) in the UI screen (500) based on an associated contour and an associated label learnt previously using the UI screen (300). Similarly, it is to be understood that the pattern matching system (120) identifies all other objects present in the UI screen (500) based on objects’ patterns and labels learnt previously using the UI screen (300).
[0070] At step (412), the pattern matching system (120) determines if every single object in each of the UI screens rendered by the reference device (102) is also present in a corresponding UI screen rendered by the test devices (104A-B). For example, the pattern matching system (120) identifies if every single object in the UI screen (300) is also present in the UI screen (500).
[0071] At step (414), the contour based navigation system (122) in the processing device (110) identifies contours associated with objects in the UI screens rendered by the test devices (104A-B). For example, the contour based navigation system (122) identifies contours (502, 504, 506A-L, and 508), depicted using dotted lines in FIG. 5, associated with the objects in the UI screen (500). In certain embodiments, the contour based navigation system (122) identifies contours by analyzing colors and gradients associated with one or more of objects in a foreground region of the UI screen (500), objects in a middle ground region of the UI screen (500), and objects in a background region of the UI screen (500).
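A minimal sketch of such gradient-based contour extraction is shown below, using OpenCV's Canny edge detector and external-contour retrieval as illustrative stand-ins for the colour and gradient analysis described above; the disclosure does not prescribe these particular operations, and the min_area filter is an assumed noise guard.

```python
import cv2

def extract_contours(screen_path, min_area=100):
    """Detect object contours in a UI screen from its gradient edges.

    Returns the bounding boxes of contours whose area exceeds
    min_area, filtering out noise such as anti-aliasing artefacts.
    """
    image = cv2.imread(screen_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # gradient-based edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]  # list of (x, y, w, h)
```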
[0072] At step (416), the contour based navigation system (122) identifies differences between UI screens rendered by the reference device (102) and UI screens rendered by the test devices (104A-B). For instance, the contour based navigation system (122) identifies an object in the test UI screen (500) disposed at a different location when compared to a corresponding location in the reference UI screen (300). For example, the contour based navigation system (122) identifies the table (308) in the test UI screen (500) based on an associated unique contour learnt previously using the reference UI screen (300). The contour based navigation system (122) also identifies if a location of the table (308) in the test UI screen (500) is the same as a location of the table (308) in the reference UI screen (300). In one embodiment, the contour based navigation system (122) identifies if the table (308) is at the same location by retrieving location information of the table (308) learnt previously using the UI screen (300) and stored in the database (116), and by checking whether the table (308) is present at the retrieved location in the UI screen (500).
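The location check itself reduces to comparing a stored reference position with a detected position. The short sketch below illustrates this; the pixel tolerance, the label scheme, and the in-memory dictionary standing in for the location records of the database (116) are assumptions for illustration.

```python
def classify_object_position(label, detected_box, reference_locations,
                             tolerance=10):
    """Return 'matched' or 'offset' for an identified object.

    reference_locations is an in-memory stand-in for the learnt
    location records; tolerance (in pixels) absorbs minor
    rendering differences between devices.
    """
    ref_x, ref_y = reference_locations[label]
    det_x, det_y = detected_box[0], detected_box[1]
    if abs(det_x - ref_x) <= tolerance and abs(det_y - ref_y) <= tolerance:
        return "matched"  # object found at its learnt location
    return "offset"       # object present, but relocated

# e.g. the table (308) learnt at (40, 420) but detected at (40, 700)
status = classify_object_position(
    "table_308", (40, 700, 300, 120), {"table_308": (40, 420)})
# status -> 'offset'
```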
[0073] It may be noted that existing test automation solutions that use a pixel-wise image comparison method fail to identify the table (308) in the UI screen (500), as such solutions only compare pixels at the location (312) in the UI screen (300) with pixels at the same location (312) in the UI screen (500). During the development phase of the application (114), locations of objects in UI screens may frequently change based on changing requirements. The contour based navigation system (122) of the present disclosure is capable of identifying such objects that are disposed at different locations when compared to corresponding locations in a reference image, whereas existing solutions fail to identify such offset objects.
[0074] In one example, the contour based navigation system (122) identifies that the table (308) is disposed at a different location when compared to a corresponding position in the UI screen (300). Specifically, the table (308) may be present at a bottom portion of a test UI screen rendered by a test device (104B), whereas the table (308) may be present at a location (312) in the reference UI screen (300). In this example, the contour based navigation system (122) is still able to identify the presence of the table (308) in the test UI screen because the contour based navigation system (122) identifies UI objects based on their previously learnt contour information.
[0075] The contour based navigation system (122) also identifies a newly added object that is present in the test UI screen (500) but is absent from the reference UI screen (300). In one embodiment, the contour based navigation system (122) performs optical character recognition (OCR) to identify newly added text. In another embodiment, the contour based navigation system (122) performs contour analysis on all objects present in the UI screen (500) to identify other types of newly added objects, such as a newly added logo or a newly added symbol. For instance, the contour based navigation system (122) performs OCR, identifies text within each of the contours (506A-L), and thereby identifies the menu item (501) that is newly added in the UI screen (500). Similarly, the contour based navigation system (122) identifies removal of an object, that is, an object that is present in the reference UI screen (300) but is absent from the test UI screen (500). To that end, the contour based navigation system (122) performs OCR, identifies text within each of the contours (506A-L), and thereby identifies the menu item (306L) that is absent from the UI screen (500).
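A sketch of this OCR-within-contours comparison follows, using pytesseract as an illustrative OCR engine; the helper name and the example menu texts are invented for illustration and do not come from the disclosure.

```python
import cv2
import pytesseract  # illustrative OCR engine

def texts_within_contours(screen_path, boxes):
    """OCR the image region inside each contour bounding box."""
    image = cv2.imread(screen_path)
    texts = set()
    for (x, y, w, h) in boxes:
        region = image[y:y + h, x:x + w]
        text = pytesseract.image_to_string(region).strip()
        if text:
            texts.add(text)
    return texts

# Newly added items appear only in the test screen; removed items
# appear only in the reference screen (example texts are invented).
reference = {"Home", "Movies", "Sports"}
test = {"Home", "Movies", "Sports", "Kids"}
added, removed = test - reference, reference - test
# added -> {'Kids'}, removed -> set()
```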
[0076] The contour based navigation system (122) also identifies another type of newly added object, for example, a logo that is present in the test UI screen (500) but is absent from the reference UI screen (300), or vice versa. For instance, the contour based navigation system (122) performs contour analysis of all objects present in the test UI screen (500). The contour based navigation system (122) then identifies a contour that is additionally present in the test UI screen (500) but is absent from the reference UI screen (300). The contour based navigation system (122) determines the identified contour to be a newly added object in the test UI screen (500) and proceeds to generate a test execution report. The test execution report indicates all newly added objects in the test UI screen and any other differences between the test UI screen (500) and the reference UI screen (300), such as offset objects and objects present in the reference UI screen (300) but absent from the test UI screen (500).
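For such non-text objects, the comparison could, for instance, rely on a shape-similarity score between contours. The sketch below uses OpenCV's matchShapes as one plausible measure; the similarity threshold is an assumed tuning parameter.

```python
import cv2

def find_new_contours(test_contours, reference_contours,
                      similarity_threshold=0.1):
    """Flag test-screen contours with no sufficiently similar shape
    in the reference screen as candidate newly added objects.

    cv2.matchShapes returns 0.0 for identical shapes; smaller is
    more similar. The 0.1 threshold is an assumed tuning value.
    """
    if not reference_contours:  # nothing learnt yet: all are new
        return list(test_contours)
    newly_added = []
    for tc in test_contours:
        best = min(cv2.matchShapes(tc, rc, cv2.CONTOURS_MATCH_I1, 0.0)
                   for rc in reference_contours)
        if best > similarity_threshold:
            newly_added.append(tc)  # e.g. a newly added logo
    return newly_added
```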
[0077] As noted previously, during the development phase of the application (114), UI screens associated with the application (114) may be frequently changed by removing existing objects and/or by adding new objects. Existing test automation solutions that use a pixel-wise image comparison method fail to identify the presence of any such newly added object that is not present in a reference image. Further, such existing solutions need a new set of reference images and updated test scripts for testing and validating the presence of newly added objects, which leads to a considerable increase in rework effort, test cycle time, and cost. However, the contour based navigation system (122) of the present disclosure identifies the presence of the newly added object without needing a new set of reference images and/or updated test scripts.
[0078] At step (418), the identified differences between UI screens rendered by the reference device (102) and UI screens rendered by the test devices (104A-B) are presented for approval. As noted previously, with reference to FIG. 1, the report generation system (124) generates a test report based on the outputs of the pattern matching system (120) and the contour based navigation system (122). In one embodiment, the generated test report includes the pass/fail status of test cases or test scenarios and any differences identified between reference UI screens and test UI screens, such as offset objects, newly added objects, and removed objects. The report generation system (124) then automatically reports the identified differences to a defect management system (130) using one or more API calls. Subsequently, the defect management system (130) provides a user with selectable options for approving or rejecting the identified differences.
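The API-based reporting step might look like the following sketch. The endpoint URL, payload fields, and bearer-token authentication are illustrative assumptions, since the disclosure does not fix a particular defect management system or API contract.

```python
import requests

def report_differences(differences, api_url, api_token):
    """Post each identified UI difference to a defect tracker.

    The endpoint, payload schema, and auth scheme below are
    illustrative assumptions, not a specific tracker's API.
    """
    for diff in differences:
        payload = {
            "summary": f"UI difference: {diff['type']} - {diff['object']}",
            "description": diff.get("details", ""),
            "status": "awaiting_review",  # approved/rejected by a user later
        }
        resp = requests.post(
            api_url, json=payload, timeout=10,
            headers={"Authorization": f"Bearer {api_token}"},
        )
        resp.raise_for_status()

# e.g. report the newly added menu item (501) to a hypothetical endpoint
# report_differences(
#     [{"type": "new_object", "object": "menu_item_501"}],
#     "https://defects.example.com/api/issues", "token123")
```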
[0079] At step (420), the test automation system (100) triggers the reference device (102) to regenerate one or more reference images and to update navigation files stored in the database (116) upon approval of the identified differences between the reference UI screen (300) and a corresponding test UI screen. For example, one of the identified differences between the reference UI screen (300) and the test UI screen (500) is the menu item (501), which is present in the test UI screen (500) but absent from the reference UI screen (300). This may occur when the application (114) is updated to add the menu item (501) to the UI screen after the corresponding reference image was generated. In this example, the contour based navigation system (122) initially identifies the presence of the menu item (501) in the test UI screen (500) as a defect, as the menu item (501) is absent from the reference UI screen (300). Subsequently, the defect management system (130) provides the user with one or more selectable options for indicating whether the menu item (501) is to be considered a defect or a valid/planned change.
[0080] The defect management system (130) then triggers the test automation system (100) to capture an updated reference image of the updated UI screen (300) having the menu item (501) when a user-selected option indicates that the menu item (501) is to be considered a valid/planned change. In the absence of the updated reference image, the test automation system (100) would continue to identify the presence of the menu item (501) in the test UI screen (500) as a defect. Therefore, the test automation system (100) is configured to capture the image of the updated UI screen (300) by configuring the continuous integration server (108) to transmit the updated application (128) to the reference device (102).
[0081] The reference device (102) receives the updated application (128) from the continuous integration server (108) and executes the updated application (128). Subsequently, the script execution system (118) executes a portion of the automated test scripts to enable the reference device (102) to render the updated UI screen (300) having the menu item (501), which is captured using the image-capturing unit (106). The processing device (110) then uses the captured image including the updated UI screen (300) as the reference UI screen for subsequent test executions.
[0082] In addition, the navigation path updating system (126) generates a navigation path for aiding automatic navigation to the menu item (501) in subsequent test executions without needing to update the automated test scripts stored in the database (116). To that end, the pattern matching system (120) identifies the newly added object, for example, the menu item (501) in the updated UI screen (300). Subsequently, the contour based navigation system (122) identifies a contour (506L) associated with the menu item (501). The contour based navigation system (122) also identifies that the menu item (501) is present below the menu item (306K), for example, by performing OCR to identify text within each of the contours (506A-L).
[0083] Subsequently, the navigation path updating system (126) generates a unique navigation path from a contour (506K) associated with the menu item (306K) to a contour associated with the menu item (501). The navigation path updating system (126) also updates the navigation files stored in the database (116) by appending the generated navigation path to the navigation files and/or modifying a corresponding previous navigation path. Updating the navigation files enables the test automation system (100) to automatically navigate to the new menu item (501) in subsequent test executions, select the menu item (501) automatically, and verify if the selection of the menu item (501) provides an expected functionality.
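As a sketch of this navigation-file update, assuming the navigation files are JSON documents holding a list of navigation hops (a format the disclosure does not prescribe):

```python
import json

def append_navigation_path(nav_file, new_hop):
    """Append a navigation hop to a JSON navigation file.

    The file layout (a dict with a 'paths' list) is an assumed
    format chosen for this illustration.
    """
    with open(nav_file, "r", encoding="utf-8") as f:
        nav = json.load(f)
    nav.setdefault("paths", []).append(new_hop)
    with open(nav_file, "w", encoding="utf-8") as f:
        json.dump(nav, f, indent=2)

# e.g. record the hop from menu item 306K to the new menu item 501
append_navigation_path("navigation.json", {
    "screen": "home_500",
    "from": "menu_item_306K",
    "to": "menu_item_501",
    "action": "key_down",  # one 'down' press on the remote/keypad
})
```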
[0084] In certain embodiments, the contour based navigation system (122) of the present disclosure is configured to identify and navigate to different target objects in a designated screen in a faster manner. An exemplary approach associated with faster identification of a target object, for example, a menu item (306J) in the designated screen (500) and navigation to the next object is described in detail in the following sections.
[0085] The contour based navigation system (122) identifies the target object, which is the menu item (306J), to navigate to next based on the sequence of test script execution outlined in a designated test case. In one embodiment, a display screen size associated with the test device (104A) may be so small that only a first set of objects (302, 304, 306A-F, and 308) in the UI screen (500) is visible, whereas a second set of objects (306G-K and 501) is hidden in the UI screen (500). The portions of the UI screen (500) in which the first set of objects (302, 304, 306A-F, and 308) and the second set of objects (306G-K and 501) may exist are referred to as a perceptible region (512) and an imperceptible region (514), respectively, which are depicted on either side of a line (516) in FIG. 5.
[0086] The contour based navigation system (122) compares text associated with the menu item (306J) with text, if any, within the contours (502, 504, 506A-F, and 508) associated with the first set of objects (302, 304, 306A-F, and 308). The contour based navigation system (122) further identifies if the text associated with the menu item (306J) matches text associated with at least one contour selected from the first set of contours (502, 504, 506A-F, and 508). In the example depicted in FIG. 5, the contour based navigation system (122) identifies that the menu item (306J) is not present in the first set of objects (302, 304, 306A-F, and 308) because the text associated with the menu item (306J) does not match the text within the contours (506A-F and 508) present in the perceptible region (512).
[0087] Subsequently, the contour based navigation system (122) clears the first set of objects (302, 304, 306A-F, and 308) from an associated memory of the processing device (110) without validating any functions associated with these objects. Further, the contour based navigation system (122) scrolls down the UI screen (500) when the menu item (306J) is identified to be absent from the first set of objects (302, 304, 306A-F, and 308). Scrolling down moves the content of the UI screen (500) upwards, causing the contours (506G-L) along with the associated second set of objects (306G-K and 501) to move from the imperceptible region (514) to the perceptible region (512). Subsequently, the contour based navigation system (122) identifies if the menu item (306J) is present in the second set of objects (306G-K and 501) by comparing text associated with the menu item (306J) with text within the contours (506G-L) associated with the second set of objects (306G-K and 501). In one embodiment, the contour based navigation system (122) identifies the menu item (306J) to be present in the second set of objects (306G-K and 501) based on the comparison.
[0088] Upon detecting the presence of the menu item (306J) in the second set of objects (306G-K and 501), the contour based navigation system (122) arranges the second set of contours (506G-L), including the associated objects, in a designated order to obtain a list of arranged contours. The contour based navigation system (122) then identifies a position of the menu item (306J) in the list of arranged contours. For example, the contour based navigation system (122) identifies that the menu item (306J) is located at the third position from the menu item (306G). The contour based navigation system (122) finally navigates to the menu item (306J) by performing a first navigation from the menu item (306G) to the menu item (306H), a second navigation from the menu item (306H) to the menu item (306I), and a third navigation from the menu item (306I) to the menu item (306J).
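The search-then-navigate flow of paragraphs [0086] to [0088] can be summarized in the following sketch; the three callbacks are assumed device-driver hooks, and the focus is assumed to start at the first visible item.

```python
def navigate_to_target(target_text, get_visible_texts, scroll_down,
                       press_down, max_scrolls=10):
    """Find a target menu item and navigate to it hop by hop.

    get_visible_texts(): OCR'd texts of contours currently in the
    perceptible region, in top-to-bottom (arranged) order.
    scroll_down(): reveals the next region of the screen.
    press_down(): moves focus one item down from the current item.
    """
    for _ in range(max_scrolls + 1):
        visible = get_visible_texts()
        if target_text in visible:
            # e.g. three hops: 306G -> 306H -> 306I -> 306J
            for _ in range(visible.index(target_text)):
                press_down()
            return True
        scroll_down()  # target imperceptible; bring next region into view
    return False       # not found within the scroll budget
```

Note that this loop scrolls only when the target is not already perceptible, matching the behaviour described in the following paragraphs.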
[0089] The contour based navigation system (122) of the present disclosure identifies a hidden target object in a designated UI screen to navigate to next in a faster manner when compared to existing approaches that use time consuming continuous scrolling and pixel-wise approaches. Additionally, such existing approaches perform OCR analysis to identify target text on a screen. Continuous scrolling entails extended screen refresh times, and hence, performing OCR on hidden portions of the screen and identifying the target text in the screen may be delayed. Further, such existing approaches scroll down even if a target object is present in a perceptible region of the screen. However, the contour based navigation system (122) of the present disclosure does not scroll up or down the screen when the target object is identified to be present in the perceptible region, which avoids waiting for screen refresh times. Hence, the contour based navigation system (122) identifies the target object in a faster manner.
[0090] Further, unlike existing approaches, the contour based navigation system (122) does not continuously scroll up or down the screen to identify the target object. The contour based navigation system (122) identifies if a target object is present among a first set of objects in a perceptible region (512) of a screen by performing a contour analysis and/or an OCR analysis. The contour based navigation system (122) then scrolls the screen only when the target object is determined not to be present among the first set of objects. Hence, the contour based navigation system (122) does not need to scroll the screen continuously, thus preventing delays in identifying the target object in the screen.
[0091] Further, as noted previously, the test automation system (100) of the present disclosure tests different types of devices (104A-N) using the same set of reference images generated using the reference device (102) though the test devices (104A-N) and the reference device (102) may differ in their characteristics including models, screen sizes, and operating platforms.
[0092] Further, the test automation system (100) does not need different test scripts to be developed for testing different types of devices, as the test automation system (100) uses the pattern matching system (120) and the contour based navigation system (122) for identifying objects in the UI screens in lieu of a pixel-based image comparison method. Moreover, the test automation system (100) is capable of identifying objects even when their locations in the reference and test UI screens differ. Additionally, the test automation system (100) identifies newly added objects in the test UI screen (500) that are not present in the reference UI screen (300) without increasing test script rework effort, test cycle time, or cost.
[0093] Although specific features of various embodiments of the present systems and methods may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics may be combined and/or used interchangeably in any suitable manner in the various embodiments shown in the different figures.
[0094] While only certain features of the present systems and methods have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the claimed invention.