Abstract: A test automation method and system (100, 200) for evaluating a device under test (DUT) (102) is presented. A screen transitions database (204) including unique screen definitions, known checkpoints, and partial transition paths from unique to neighboring screens is used. A global exception handling guidance (EHG) database (212) including instructions to be executed upon encountering unknown or unexpected screens is also provided. An execution engine (108, 116) performs automated testing on the DUT (102) by identifying the current screen via iterative comparison of corresponding checkpoints with known checkpoints of unique screens. Additionally, complete transition paths between identified and target screens are recursively determined using the partial transition paths. A transition path is selected from the complete transition paths to navigate to the target screen. Subsequently, test scripts evaluating functionality of current, intermediate, and/or target screens are executed such that the system (100, 200) centrally executes EHG instructions upon encountering an unknown or unexpected screen. FIG. 2
BACKGROUND
Embodiments of the present specification relate generally to test automation, and more particularly to a system and method for improved exception handling in test automation systems.
Success in the marketplace often depends on an ability to constantly customize existing products to match changing demands of customers with shorter product release cycles, while continuing to deliver products of the highest quality. Accordingly, products undergo extensive testing to ensure the end products meet desired quality criteria. Owing to the considerable efforts involved, manual testing is often unfeasible for such extensive testing of products within a short time frame. Accordingly, test automation systems are increasingly used to provide better scalability, repeatability, and accuracy to aid in faster and more cost-effective development and maintenance of various products and/or services.
[0003] By way of example, a video service provider may schedule frequent hardware and/or software updates for a device under test (DUT) such as a set top box (STB) to ensure provision of superior video service to its customers. Accordingly, for each hardware and/or software update, the test automation system may perform a series of automated tests on the STB to not only validate desired behavior of new features, but also to perform regression testing of existing functionality. Typically, such automated tests entail record and playback functionality that allows a tester to use the test automation system to interactively record user actions and replay the actions repeatedly to verify whether actual results match expected outcomes. To that end, the test automation system may employ one or more user-defined checkpoints for automatically validating different operations performed at each screen in the STB. The checkpoints, for example, may include a bitmap signature corresponding to a selected region or an element in an STB screen that may be used to achieve a desired operation. In one conventional implementation, the test automation system may execute test scripts that fire IR signals, capture the resulting audio and/or video output, and compare the output with expected results by identifying the checkpoints, for example, via use of optical character recognition.
[0004] However, conventional test automation systems depend primarily on script based test automation. Such script based test automation merely mimics a pre-recorded sequence of user input for validation of desired STB operations using pre-defined parameters for each STB screen. Accordingly, robust and reliable testing of the STB requires a test automation engineer to foresee and address all expected events, corresponding responses, and all possible screen transitions from and to each of the individual STB screens via suitable test scripts.
[0005] Capturing such information via suitable test scripts for each individual STB screen, however, is a cumbersome and memory-intensive task. Moreover, even a simple re-labeling of a button or repositioning of a screen element, for example as part of a software and/or hardware update, may render these conventional test scripts invalid. Additionally, the conventional test scripts often fail to appropriately address unexpected events or screen transitions during test execution. Particularly, in absence of a pre-defined or previously recorded user input or screen transition, the conventional test scripts fail to adequately handle an error message, an unexpected notification window, and/or an unexpected environment condition. Specifically, conventional test automation systems fail to either continue test execution suitably or gracefully quit the automated STB test sequence without wasting captured test efforts. Use of conventional test automation systems, thus, poses major reliability and maintainability issues for development and testing of products that undergo frequent updates.
SUMMARY
[0006] In accordance with certain aspects, a method for improved exception handling in a test automation system configured to evaluate a device under test is presented. The method includes generating a screen transitions database comprising definition of one or more unique screens in the device under test, one or more known checkpoints that identify each of the unique screens, and one or more partial transition paths from each of the unique screens to all immediately neighboring screens. Additionally, the method includes generating a global exception handling guidance database including one or more instructions to be executed when the test automation system encounters an unknown screen or one of the unique screens unexpectedly during execution of a test script on the device under test. Further, the method includes performing automated testing on the device under test, wherein the automated testing includes identifying a current screen by iteratively comparing checkpoints in the current screen with the known checkpoints in one or more of the unique screens. The automated testing also includes recursively determining one or more complete transition paths between the identified screen and a target screen defined in the test scripts using the partial transition paths stored in the screen transitions database. Moreover, the automated testing includes selecting a desired transition path from the complete transition paths based on user preference and/or a predefined configuration, followed by navigating to the target screen via the selected transition path. Subsequently, one or more test scripts are executed for evaluating a desired functionality related to one or more of the current screen, one or more intermediate screens, and the target screen such that the test automation system centrally executes the specific instructions in the global exception handling guidance database upon encountering an unknown screen or one of the unique screens unexpectedly during execution of the test scripts.
[0007] In accordance with certain aspects of the present specification, a graphical user interface (GUI)-based test automation system configured to evaluate a device under test is presented. The system includes a screen transitions database including definition of one or more unique screens in the device under test, one or more known checkpoints that identify each of the unique screens, and one or more partial transition paths from each of the unique screens to all immediately neighboring screens. The system further includes a global exception handling guidance database including one or more instructions to be executed when the test automation system encounters an unknown screen or one of the unique screens unexpectedly during execution of a test script on the device under test. Moreover, the system includes an execution engine operatively coupled to the device under test, the screen transitions database, and/or the global exception handling guidance database. The execution engine is configured to perform automated testing on the device under test by identifying a current screen by iteratively comparing checkpoints in the current screen with the known checkpoints in one or more of the unique screens. The execution engine further recursively determines one or more complete transition paths between the identified screen and a target screen defined in the test scripts using the partial transition paths stored in the screen transitions database. Additionally, the execution engine selects a desired transition path from the complete transition paths based on user preference and/or a predefined configuration and navigates to the target screen via the selected transition path. Subsequently, one or more test scripts are executed for evaluating a desired functionality related to one or more of the current screen, one or more intermediate screens, and the target screen such that the test automation system centrally executes the specific instructions in the global exception handling guidance database upon encountering an unknown screen or one of the unique screens unexpectedly during execution of the test scripts.
BRIEF DESCRIPTION OF DRAWINGS
[0008] These and other features, aspects, and advantages of the claimed subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0009] FIG. 1 is a schematic representation of a GUI-based test automation system for a set top box;
[0010] FIG. 2 is a diagrammatical representation of an exemplary test sequence including improved exception handling achieved using the system of FIG. 1;
[0011] FIG. 3 is a diagrammatical representation of an exemplary Exception Handling Guidance (EHG) knowledge recorded for use by the system of FIG. 1 for providing improved exception handling;
[0012] FIG. 4 is a flow diagram depicting an exemplary method for identifying a current screen during execution of the test cases;
[0013] FIG. 5 is a schematic representation of various screens or states of an application and corresponding screen transitions provided to the system of FIG. 1 for improved exception handling while executing the test cases;
[0014] FIG. 6 is a diagrammatical representation of a sample exception reporting screen encountered during the test sequence shown in FIG. 5;
[0015] FIG. 7 is a diagrammatical representation of a sample message notification screen encountered during the test sequence shown in FIG. 5;
[0016] FIG. 8 is a flow diagram depicting an exemplary method for moving from the current screen to a target screen during execution of the test cases; and
[0017] FIG. 9 is a flow diagram depicting an exemplary method for generating a list of potential screen transition paths that may be used for moving from the current screen to a target screen during execution of the test cases.
DETAILED DESCRIPTION
[0018] The following description presents exemplary systems and methods for improved exception handling in test automation systems. Particularly, embodiments described herein disclose exemplary methods that aid in uniquely identifying unexpected events and/or screens during test automation to either continue execution of the test scripts or to gracefully quit the automated test sequence. An exemplary framework that is suitable for practicing various implementations of the present system is discussed in the following sections with reference to FIG. 1.
[0019] FIG. 1 illustrates an exemplary system 100 for providing improved exception handling during automated test execution. For clarity, the present embodiment is described with reference to automated testing of an STB 102. However, it may be noted that embodiments of the system 100 may be used for improved exception handling in other types of automated testing scenarios, such as while testing a cellular phone, a tablet computer, and/or an appliance connected to a communications network.
[0020] Generally, the STB 102 is configured to decode signals received from a service provider, for example, via cable or satellite and display a resulting audio and/or video (AV) feed on a display device such as a television screen. In certain embodiments, the STB 102 includes an electronic program guide (EPG) to allow a user to interactively request desired AV feeds and/or other content from a streaming server 104. Particularly, the user may request the desired AV feed and/or other STB functionality such as games or internet connectivity via an infrared (IR) or radiofrequency (RF) based control unit 106. The service provider, however, may frequently update the EPG and/or other software and/or hardware characteristics of the STB 102 to provide enhanced viewing experience and additional functionality.
[0021] Each of these frequent updates is typically preceded by extensive testing of the updated and/or existing STB features to ensure minimal disruption of STB services. Accordingly, in one embodiment, the system 100 includes a test management server (TMS) 108 configured to control testing and validation of one or more desired STB operations. Particularly, in one embodiment, the TMS 108 is operatively coupled to a test automation system (TAS) 110 that is configured to implement automated STB testing protocols using one or more test scripts stored in a database server 112. Although FIG. 1 depicts the TMS 108 and the TAS 110 as two separate devices, in certain embodiments, the system 100 may include a single device configured to provide functionality of both the TMS 108 and the TAS 110. Alternatively, more than two devices may be used to provide the functionality of both the TMS 108 and the TAS 110.
[0022] In one embodiment, the TAS 110 may be configured to execute suitable test scripts that implement a desired sequence of operations under control of the TMS 108. The sequence of operations defined in the test scripts, for example, may include navigating to a target EPG screen, and/or selecting a specific menu item in an identified EPG screen. In certain embodiments, the test scripts may also include commands to reconfigure operational parameters of the STB 102, capture AV feed at desired operational characteristics, and/or switch the STB 102 between active and/or inactive modes via a power control module 114.
[0023] Accordingly, in one embodiment, the TAS 110 includes an execution engine 116 configured to execute selected test scripts retrieved from the database server 112, for example, over one or more wired and/or wireless communications links. Particularly, in one embodiment, the execution engine 116 selects and executes test scripts that configure the control unit 106 to provide actual or simulated IR and/or RF control signals 118 for testing STB operations that involve user input typically received, for example, via a remote control unit. The STB 102, in turn, may request a desired video feed from the streaming server 104 in response to the received control signals 118. In certain embodiments, the TAS 110 is communicatively coupled to the streaming server 104 that may provide the STB 102 with serial data over a wired and/or wireless communications network 120 such as a wireless local area network or the Internet for automated testing. The TAS 110, thus, may allow for testing of the request and response operations of the STB, while testing quality of the AV feed received from the streaming server 104.
[0024] Particularly, in certain embodiments, the system 100 may include one or more web clients 122 that provide test automation engineers with access to the TAS 110 over the communications network 120. Specifically, the web clients 122 allow the test engineers to use the TAS 110 to generate and/or configure different test suites for testing one or more desired STB functions. In certain embodiments, the TAS 110 includes a user management module 124 that defines user access parameters such as security configurations and/or custom test configurations corresponding to different users. The user management module 124, thus, may aid in ensuring that designated test engineers are allowed to configure automated tests for only those modules for which they have valid authentication and/or authorization. Additionally, the user management module 124 may also allow the TAS 110 to pre-load custom or user-defined configurations for faster testing.
[0025] In one implementation, for example, the TAS 110 may allow a test automation engineer to record actions of one or more users on the STB 102 as test scripts. These test scripts may be organized in test suites to test for specific STB functions or performance parameters. Subsequently, the test suites may be executed to replay or emulate the user actions while using the video service, thus allowing for automated testing of STB functionality and performance. Further, the TAS 110 may also allow recording of one or more checkpoints such as labels associated with screen elements to uniquely identify each of the STB screens, the checkpoints being used to invoke and execute accurate test suites for each STB screen. Additionally, the TAS 110 may also allow storage of a priori information including expected user actions corresponding to each screen and/or potential transitions between different screens in a synchronous and asynchronous System States and Transitions (SSnT) database. The SSnT database and its use in the automated testing will be described in greater detail with reference to FIG. 2.
[0026] Moreover, in certain embodiments, the TAS 110 interfaces with other modules associated with the STB 102 such as the power control module 114 and/or the control unit 106 to allow the automated tests to verify operations such as switching to a particular channel, selecting an EPG menu item, and/or switching the STB 102 between on, off, and/or standby modes. In one embodiment, the TAS 110 allows for record and playback of a test sequence for verifying STB operations, for example, operations available at each STB screen and those that involve navigating from a source to a target screen.
[0027] In conventional test automation systems, the record and playback is often irrevocably hindered by appearance of an unhandled unexpected window or error message. An unexpected window is a window that appears during the playback of an automated test, but was not visible during the recording of the test sequence. For example, during execution of the test script, a modal window that displays a message about an assertion, an error message from the operating system, or another application window may overlap at a point in the test execution where a user action such as a remote control key press should be simulated. Such a scenario interrupts test script execution, thus leading to failure of the test case for an unknown reason and/or causing time-consuming system reboots.
[0028] Certain conventional test automation systems attempt to address these unexpected windows/exceptions by providing error handling mechanisms in individual test scripts of each STB screen. However, handling all types of unexpected windows/exceptions at the level of test scripts leads to inefficient test automation. Particularly, in view of the frequent updates to the user interface, software, and/or hardware of the STB 102, error handling in individual test scripts needs to be updated frequently, thus requiring considerable rework on part of the test automation engineers. Script-level error handling in conventional automation systems, thus, renders the test automation framework difficult to reuse and/or maintain, in turn leading to long product development and/or maintenance cycles.
[0029] Unlike such conventional test automation systems, the system 100 described herein provides a novel technique that allows for common or centralized handling of exceptional scenarios during automated test execution. Specifically, in one embodiment, the TMS 108 and/or the TAS 110 may be configured to provide common exception handling services to all automated test scripts. Use of common exception handling services eliminates a need to handle unexpected and/or unknown windows at the test script level not only during initial recording of test scripts, but also following software and/or hardware updates, thus saving a considerable amount of time. According to certain aspects of the present specification, the execution engine 116 in the TAS 110 uses the SSnT and/or previously determined exception handling guidance (EHG) information to provide centralized exception handling during test execution. The SSnT and the EHG will be described in greater detail with reference to FIGs. 2-6.
[0030] Further, in certain embodiments, the TAS 110 invokes suitable test suites including a group of test scripts to test different functionalities and/or performance parameters for different STB screens. Further, the TAS 110 identifies success or failure of the individual test scripts, for example, based on image and/or bitmap based comparison. Accordingly, in certain embodiments, the TAS 110 includes an image comparison module 126 that compares output test data such as a received AV feed with corresponding expected values. The expected values, for example, may be stored in the database server 112 through a knowledge acquisition step, such as record and playback, performed prior to testing the STB 102.
[0031] In certain embodiments, the TAS 110 further includes an optical character recognition (OCR) module 128 that aids the image comparison module 126 in comparing the received information against the corresponding expected values. Particularly, in an exemplary implementation, the OCR module 128 may be configured to generate a text string in response to the received AV feed. Subsequently, the OCR module 128 communicates the generated text string to the image comparison module 126 for comparison with an expected text string that may be previously identified and stored in the database server 112. A match between the generated and expected text strings may be used to accurately identify a specific EPG screen, or a selected element of interest within the screen or on the display device. Accurate identification of the screen or screen element allows the TAS 110 to determine the correct test scripts and test sequence to be executed for testing a desired STB operation at current or a subsequent STB screen. Furthermore, use of the SSnT and/or the EHG information allows the STB 102 under test to safely continue or gracefully quit the testing sequence upon encountering an unknown and/or unexpected message or window during test execution.
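For illustration only, the following Python-style sketch shows one way in which an OCR-derived text string may be compared with a previously stored expected value to verify a checkpoint. The class and field names (Checkpoint, expected_text) are hypothetical and are not prescribed by the present specification; a simple substring match stands in for the image and/or bitmap comparison performed by the image comparison module 126.

    from dataclasses import dataclass

    @dataclass
    class Checkpoint:
        name: str            # label identifying a screen element of interest
        expected_text: str   # text string recorded for this checkpoint prior to testing

    def verify_checkpoint(ocr_text: str, checkpoint: Checkpoint) -> bool:
        # Compare the OCR output for a captured screen region with the stored value.
        return checkpoint.expected_text in ocr_text

    # Example: an EPG guide screen may be recognized when its title text is found.
    guide_title = Checkpoint(name="EPG_GUIDE_TITLE", expected_text="Program Guide")
    print(verify_checkpoint("Program Guide   Thu 8:00 PM", guide_title))  # True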
[0032] Additionally, in certain embodiments, the system 100 includes a test logger 130 configured to record one or more characteristics of an STB operation being tested in a log file. The characteristics, for example, may include input and output of each test module, values of one or more desired parameters of the STB, a time to execute a particular test module, and/or details of known but unexpected windows encountered asynchronously during test execution. The details of different unexpected windows such as error or notification windows and corresponding error handling scripts, in particular, may be stored as part of the SSnT and/or the EHG in the database server 112 to enhance the test automation sequence and/or scripts for subsequent test executions. Additionally, the EHG may also store error-handling routines that suitably address unknown screens that do not match any of the screen specifications provided to the system. The error-handling routines may allow the system 100 to either continue or terminate test execution in a desirable manner, while allowing the test logger 130 to capture useful test output.
[0033] In certain embodiments, the test logger 130 provides the test output as feedback for automatically enhancing the SSnT and/or the EHG, which allows for shorter maintenance cycles. Additionally, the test logger 130 may transmit the log file to a report module 132. In one embodiment, the report module 132 generates and/or displays a report including predefined or user-specified details of one or more STB operations that were tested. The desired details, for example, may include a test identifier, a software revision number, an audio pass-fail identifier, a video pass-fail identifier, an image string pass-fail identifier, and/or details of any unknown screens encountered. The reported details, in turn, may be used to improve the functionality and performance of the STB 102. Additionally, the reported details may be used to augment the SSnT, the EHG, and/or individual test scripts for improving speed and/or accuracy of the test automation process for future use.
[0034] Further, FIG. 2 depicts a schematic representation 200 of an exemplary process flow for providing improved exception handling using the system of FIG. 1. Conventional test automation products control a device under test (DUT), such as the STB 102 of FIG. 1, by firing IR signals, capturing AV output, and comparing the output with known or expected results. As previously noted, such conventional systems, however, fail to efficiently handle occurrence of unexpected windows/exceptions or messages during execution of an automated test sequence. Certain conventional test automation systems attempt to address occurrence of unexpected windows/exceptions at the level of each individual test case. However, such a solution is overly complex owing to the large number of asynchronous events that may potentially be triggered by different states of the application, the DUT, the OS, and/or other application software. Handling of the unexpected windows/exceptions is further complicated as there may be no direct mapping relationships between functional elements present in these windows and the generated test scripts. Furthermore, a conventional TAS may not have any knowledge about the behavior of the overall system or the DUT.
[0035] Embodiments of the present system and method, however, provide an effective mechanism to handle unexpected windows/exceptions while automating test execution for interactive testing at a common point. Specifically, the system 100 (see FIG. 1) allows for efficient unexpected windows/exception handling via use of prior knowledge 202 of various screens, screen transitions, and/or exceptional cases. In certain embodiments, the system 100 presents this prior knowledge in a graphical or visual format 203 that is machine understandable to aid in verification of system transitions and rules at a macro level both by a machine and/or a human.
[0036] Particularly, in one embodiment, the system 100 stores the prior knowledge as SSnT 204 in an associated memory device such as the database server 112 of FIG. 1. In an exemplary implementation, the SSnT 204 includes checkpoints 206 to uniquely identify each STB screen, and one or more potential user actions 208 or scripts representing events that may occur at each STB screen and/or lead to known screen transitions 210. In certain embodiments, the SSnT 204 may also include definitions of synchronous and asynchronous screen transitions, unique screens including exception and/or message notification dialogs, hot keys or user actions that bring the system 100 to various unique states from anywhere during automated test execution, irrespective of the current state, and/or derived states based on user-defined knowledge. The user-defined knowledge, for example, may be input during initial recording of the test cases by the test automation engineer and/or via use of an editor.
[0037] Further, in certain embodiments, the SSnT 204 may be stored in the database server 112 along with selected details corresponding to the STB 102 under test. An exemplary SSnT 204 for use with the system 100 is depicted in FIG. 5. In one embodiment, the SSnT 204 may initially be in a preliminary or “raw” state. Accordingly, for use in test automation, the test automation engineer may use the “purify” command to validate the SSnT 204 by uniquely identifying the STB screens. Specifically, the STB screens may be identified based on a stored correlation in the SSnT 204 that maps one or more unique checkpoints to each STB screen. In certain scenarios, the user action for transition from one screen to another may be composite. In such scenarios, intermediate screens having derived checkpoints may be inserted in the SSnT for intermediate screen identification. The derived checkpoints, for example, may include all checkpoints valid for the intermediate screen, which is listed either as the ‘from screen’ or the ‘to screen’ during test execution.
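Purely as an illustrative sketch, the SSnT 204 may be represented in memory along the following lines, with each unique screen mapped to its identifying checkpoints and only the partial transition paths to immediate neighbors stored. The checkpoint names and the particular assignment of user actions UA1, UA2, and UA6 to transitions are assumptions made for this example and are not mandated by the present specification.

    ssnt = {
        "screens": {
            # each unique screen is identified by one or more checkpoints
            "S1":  {"checkpoints": ["CP_HOME_TITLE"]},
            "S2":  {"checkpoints": ["CP_GUIDE_TITLE", "CP_GUIDE_GRID"]},
            "ES1": {"checkpoints": ["CP_ERROR_ICON"]},     # known exception dialog
            "NS1": {"checkpoints": ["CP_NOTIFY_BANNER"]},  # known notification dialog
        },
        # partial transition paths: only immediate neighbors of each screen are stored
        "transitions": [
            {"from": "S1", "to": "S2", "action": "UA1"},
            {"from": "S2", "to": "S1", "action": "UA2"},
        ],
        # hot keys that return the STB to a unique state from any screen
        "hot_keys": {"UA6": "S1"},
    }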
[0038] Furthermore, any unknown or undefined screen is appropriately handled via use of a common Exception Handling Guidance (EHG) 212. In one embodiment, the EHG 212 includes one or more exception and/or notification dialogs and corresponding user actions needed to gracefully return from a known exception state or an unknown state to an original or a default state.
[0039] By way of example, FIG. 3 depicts a diagrammatical representation of a sample EHG 300, such as the EHG 212 of FIG. 2. Particularly, in one embodiment, a common EHG 300 is defined for a set of the test automation scripts. In case of occurrence of an exception ES1, an unexpected notification screen NS1, or an unidentified screen during test case execution, the TAS 110 of FIG. 1 takes control and executes a corresponding script defined in the EHG 300 to handle the situation, as shown in FIG. 3. For example, if the TAS 110 identifies the unexpected screen to be a message notification based on a checkpoint-based analysis, the TAS 110 may be configured to execute a script that mimics pressing an escape key, an enter key, or clicking on a focused control in the unexpected window.
[0040] Similarly, if the TAS 110 identifies the unexpected screen to be an error notification, the TAS 110 may be configured to execute a script that records details of the error in a test and/or error log. Additionally, the TAS 110 may execute a test script that mimics pressing an escape key or clicking on a focused control to gracefully return to the next test script in the original test sequence and/or to a default state. Use of the EHG 300, thus, allows the TAS 110 to provide a common or centralized exception handling that may easily be enhanced and updated, while allowing the test automation engineer to focus solely on the test cases being automated.
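By way of a non-limiting sketch, the centralized dispatch described above may be organized as a single lookup table consulted by the execution engine 116 whenever an unexpected or unknown screen is identified. The helper functions send_key() and log() are hypothetical stand-ins for firing a remote-control key and writing to the test log, and the handler contents merely mirror the examples given above.

    def send_key(key: str) -> None:
        print(f"firing remote key: {key}")     # stand-in for IR/RF signal firing

    def log(message: str) -> None:
        print(f"[test log] {message}")         # stand-in for the test logger 130

    def _handle_es1() -> None:
        log("error dialog ES1 encountered")    # record details of the error
        send_key("ESC")                        # then return gracefully

    # Global EHG table: one handler per known exception/notification screen.
    EHG = {
        "NS1": lambda: send_key("ENTER"),      # dismiss a message notification
        "ES1": _handle_es1,                    # record and escape for the error dialog
    }

    def handle_unexpected(screen_id) -> None:
        """Executed centrally by the execution engine, not by individual test scripts."""
        handler = EHG.get(screen_id)
        if handler is not None:
            handler()
        else:
            log(f"unknown screen encountered: {screen_id}")
            send_key("ESC")                    # attempt a graceful return to a default state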
[0041] With returning reference to FIG. 2, use of the prior knowledge 202 stored in the EHG 212 and the SSnT 204 of FIG. 2 aids the TAS 110 in understanding the DUT in a complete and cohesive manner. Specifically, the SSnT 204 allows the TAS 110 to identify the current screen, mimic user actions for moving to a target screen, and execute the test automation script 214, while the EHG 212 allows the TAS 110 to centrally handle abnormal conditions encountered during test execution.
[0042] To that end, in one embodiment, the TAS 110 further includes a scheduler 215 that defines a schedule for execution of suitable test automation scripts 214 that may mimic user actions 216 necessary to test desired STB functionality and/or report desired test results 218. In certain embodiments, the TAS 110 further includes a controller 220, which in operative association with the scheduler 215, the SSnT 204 and the EHG 212, controls execution of the suitable test scripts for testing one or more desired STB functions.
[0043] By way of example, the TAS 110 may employ the SSnT 204 and/or the EHG 212 for efficiently testing an STB function for accessing games available via the STB 102. Specifically, the TAS 110 may execute suitable scripts that identify a current STB screen being tested. As previously noted, the TAS 110 stores a list of one or more checkpoints that may be used to uniquely identify every known STB screen as part of the SSnT 204. Accordingly, in one embodiment, the TAS 110 may use checkpoint evaluation to allow the test automation scripts 214 to navigate from a currently identified screen to the games menu without needing to first navigate to a default EPG menu screen.
[0044] FIG. 4 is a flow chart depicting an exemplary method for accurately identifying a current screen using a checkpoint-based evaluation. In the exemplary method, a suitable data structure, for example, named ScreenList may store all screens to be verified. Accordingly, at step 402, the data structure ScreenList may be initialized. Generally, multiple test cases may be executed within a single screen. Accordingly, a current screen may be the target screen for multiple test cases. An embodiment of the present method, therefore, initializes the ScreenList with the entry for the target screen to avoid unnecessary computing, in turn allowing for performance optimization while executing the multiple test cases.
[0045] Additionally, the method employs two lists of checkpoints, CPVerifiedTrue and CPVerifiedFalse, based on a verification status of the checkpoints identified in a current screen. Further, at step 404, both CPVerifiedTrue and CPVerifiedFalse are initialized as empty lists. In one embodiment, the list CPVerifiedTrue is configured to store the list of unique checkpoints that are found true upon verification, that is, found to be present in the current screen being evaluated. Further, the list CPVerifiedFalse stores the list of unique checkpoints that are found false upon verification, that is, found to be absent from the current screen. The values in these lists may be reused while comparing the current screen with other screens in the list during screen identification. Use of the lists CPVerifiedTrue and CPVerifiedFalse, thus, aids in optimizing performance of the present method by checking the fewest number of checkpoints needed to match a defined screen or to determine that the current screen is not defined.
[0046] Accordingly, at step 406, it may be determined if there are any more items in ScreenList. If it is determined that ScreenList contains no further screen for identification, the screen remains unidentified at step 408. Alternatively, if there are further screens remaining in the ScreenList, at step 410, a variable NextScreen may be assigned the next screen in ScreenList to be verified. As previously discussed, as the ScreenList is initialized with the target screen as the first entry in the first iteration, the NextScreen may correspond to the target screen. In subsequent iterations, other screens assigned to NextScreen may be similarly verified.
[0047] Particularly, at step 412, the NextScreen may be verified by initializing a data structure, for example, named CPToVerifyScreen with all checkpoints known for NextScreen. If any checkpoint item in CPToVerifyScreen is absent from the current screen under evaluation, the absent checkpoints are added to CPVerifiedFalse. Accordingly, at step 414, if any item in CPToVerifyScreen matches an item in CPVerifiedFalse, the NextScreen is discarded and the control is returned to step 406, where further screens listed in ScreenList are evaluated to verify if the current screen matches any of the listed screens.
[0048] However, if none of the items in CPToVerifyScreen matches any item in CPVerifiedFalse, the method proceeds to step 416. At step 416, all items in CPToVerifyScreen that match CPVerifiedTrue are removed. Further, at step 418, it is verified whether CPToVerifyScreen includes any more checkpoint items for verification. If there are no further checkpoints in CPToVerifyScreen, the current screen is identified as the NextScreen at step 420. However, if there are any further checkpoints remaining in CPToVerifyScreen for verification, a variable, for example, named CheckPoint may be assigned the value of the next item in CPToVerifyScreen, as depicted in step 422.
[0049] Subsequently, the checkpoint from CPToVerifyScreen assigned to CheckPoint is verified against the checkpoints in the current screen at step 424. If the checkpoint is absent from the current screen, it is added to CPVerifiedFalse at step 426. Subsequently, the NextScreen is discarded and the control returns to step 406 for evaluation of the current screen against yet another screen listed in the ScreenList. Alternatively, if the checkpoint is found to be present in the current screen at step 424, it is added to CPVerifiedTrue. Subsequently, the control passes to step 418 and the remaining steps are performed iteratively until all checkpoints are verified and the screen is identified at step 420, or until all screens in the ScreenList have been verified against the current screen. As previously noted, CPVerifiedTrue and CPVerifiedFalse store the checkpoints already identified as present in and absent from the current screen, thereby aiding in faster comparison between the remaining screens and the current screen.
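The identification loop of FIG. 4 may be sketched as follows; the sketch is illustrative rather than a definitive implementation of the claimed method, and checkpoint_present() is a hypothetical callable standing in for the actual OCR/bitmap comparison performed against the captured screen.

    def identify_screen(screen_list, known_checkpoints, checkpoint_present):
        """screen_list: candidate screens with the target screen listed first;
        known_checkpoints: dict mapping each screen to its set of checkpoint names;
        checkpoint_present(cp): True if checkpoint cp is found in the current screen."""
        cp_verified_true, cp_verified_false = set(), set()
        for next_screen in screen_list:
            cp_to_verify = set(known_checkpoints[next_screen])
            # discard the candidate if a required checkpoint is already known to be absent
            if cp_to_verify & cp_verified_false:
                continue
            # skip checkpoints already confirmed present in an earlier iteration
            cp_to_verify -= cp_verified_true
            matched = True
            for checkpoint in cp_to_verify:
                if checkpoint_present(checkpoint):
                    cp_verified_true.add(checkpoint)
                else:
                    cp_verified_false.add(checkpoint)
                    matched = False
                    break
            if matched:
                return next_screen      # all checkpoints verified; screen identified
        return None                     # current screen remains unidentified (unknown screen)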
[0050] Additionally, in one embodiment, if the current screen fails to match any of the defined screens listed in the ScreenList, the current screen is identified as an unknown screen and the TAS 110 may be configured to execute a corresponding EHG test script defined for such unknown screens. However, if the current screen is identified as one of the defined screens listed in the ScreenList, the TAS 110 may use the screen transitions stored in the SSnT 204 to identify an optimal screen transition path for navigating from the current screen 222 via one or more intermediate screens 224 to a target STB screen 226. In one embodiment, the desired path may correspond to the shortest possible path. However, in another embodiment, the desired path may be a path that traverses through one or more other desired STB screens to test a particular STB functionality. Selection of the path, thus, may depend on pre-programmed criteria and/or user preferences.
[0051] However, rather than storing an exhaustive list of all possible screen transitions for navigating from a current STB screen 222 via one or more intermediate screens 224 to a target STB screen 226, in certain embodiments, the SSnT 204 stores only immediate neighbors, that is, those screens to which a particular screen can directly navigate. Storing only immediate neighbors allows for greater reusability and/or maintainability of the system 100 as only a small number of direct screen transitions may need to be modified in the event of any proposed or future updates to the STB 102. The TAS 110 of FIG. 1 may identify all, and potentially new, screen transition paths from a source screen to a target screen by recursively generating possible screen transitions from the partial screen transition information. An exemplary method for generating all potential screen transition paths based on the SSnT 204, and selecting an optimal path for navigating to a desired screen, is described in greater detail with reference to FIGs. 8-9.
[0052] Further, FIG. 5 illustrates a diagrammatic representation 500 of a sequence of recording an exemplary instance of the SSnT. In the embodiment depicted in FIG. 5, S1, S2, S3, S4, and S5 are STB screens that may appear during execution of a specific test case. Further, UA1, UA2, UA3, UA4, UA5, and UA6 are indicative of recorded user actions for moving from one screen to another, as shown in FIG. 5. Additionally, UA6 is indicative of a hot key event that brings the STB 102 to a default screen S1 from any other STB screen. In addition to the screens S1-S5, the SSnT may include stored definitions for ES1 and NS1 for addressing any unexpected occurrence of these screens during test execution even though they may not appear during the initial recording of the test script.
[0053] FIG. 6 depicts an example of the screen ES1 of FIG. 5 that is displayed when an unexpected exception occurs during the execution of the test script. Further, FIG. 7 depicts an example of a notification screen NS1 of FIG. 5 that is displayed in the event of occurrence of an unexpected or asynchronous event or an exception.
[0054] With returning reference to FIG. 5, for both scenarios depicted in FIGs. 6-7, the test automation engineer may add suitable test scripts to the EHG such as the EHG 212 of FIG. 2 that may be used by the TAS 110 to ensure that the test execution gracefully exits or returns to a desired state without any unnecessary screen transitions.
[0055] Further, FIG. 8 illustrates a flow chart 800 that depicts an exemplary method for determining an optimal screen transition path for navigating from a current screen identified by the method of FIG. 4 to a target screen. Embodiments of the exemplary method may be described in a general context of computer executable instructions on a computing system or a processor. Generally, computer executable instructions may include routines, programs, objects, components, data structures, procedures, modules, functions, and the like that perform particular functions or implement particular abstract data types.
[0056] Embodiments of the exemplary method may also be practiced in a distributed computing environment where optimization functions are performed by remote processing devices that are linked through a wired and/or wireless communication network. In the distributed computing environment, the computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0057] Further, in FIG. 8, the exemplary method is illustrated as a collection of blocks in a logical flow chart, which represents operations that may be implemented in hardware, software, or combinations thereof. The various operations are depicted in the blocks to illustrate the functions that are performed in the exemplary method. In the context of software, the blocks represent computer instructions that, when executed by one or more processing subsystems, perform the recited operations.
[0058] The order in which the exemplary method is described is not intended to be construed as a limitation, and any number of the described blocks may be combined in any order to implement the exemplary method disclosed herein, or an equivalent alternative method. Additionally, certain blocks may be deleted from the exemplary method or augmented by additional blocks with added functionality without departing from the spirit and scope of the subject matter described herein. For discussion purposes, the exemplary method will be described with reference to the elements of FIGs. 1-7.
[0059] In the embodiment of the method depicted in FIG. 8, the TAS 110 (see FIG. 1) identifies the current screen, selects a suitable path to reach the target screen, and moves to the target screen (TargetScreen) by applying user actions or scripts associated with intermediate transitions in the path. Accordingly, at step 802, a current screen is identified by calling an IdentifyScreen() function. In one embodiment, the screen identification may be performed using an embodiment of the method described herein with reference to FIG. 4. At step 804, it may be determined if an unexpected or unknown screen is encountered during screen identification. If the IdentifyScreen() function encounters an unexpected screen, for example a notification screen or an exception screen, at step 806, the TAS 110 executes appropriate EHG scripts predefined for the specific unexpected screen that is identified. Alternatively, if the screen identification fails and an unknown screen is detected, the TAS 110 executes EHG scripts pre-programmed for addressing the unknown screen suitably, for example, by allowing the unknown screen and/or the test case to terminate gracefully.
[0060] If no unexpected or unknown screens are encountered, it may be determined if the identified CurrentScreen is the same as the TargetScreen, as depicted by step 808. Upon determining that the CurrentScreen is the same as the TargetScreen, execution terminates at step 810 as TargetScreen is reached. However, if the CurrentScreen is not the same as the TargetScreen, the method selects a suitable path for transition to the target screen. Generally, a path includes a sequence of transitions including “from screen”, “to screen” and one or more user actions and/or scripts to be executed for the transition. In certain embodiments, the suitable path may correspond to the shortest path between the “from screen” and the “to screen.” In certain other embodiments, however, the suitable path may correspond to a path including certain specific screens or employing some particular user action and/or script.
[0061] Accordingly, to select a suitable path, at step 812, a list of screen transitions from each defined screen to their immediate neighbors may be sorted based on the “from screen” and stored in a data structure, for example, named SortedScreenList. Unlike conventional systems that predefine and store all valid paths, the present method stores only partial path information, namely the immediate neighbors of each screen, in the SSnT 204 to allow for faster and easier maintenance of the system 100. Moreover, a list, for example, named PathList may be initialized to store all valid paths from the current screen to the target screen. In one embodiment, the PathList, upon initialization, is empty or has no values. Further, a function GeneratePath (“”, FromScreen, TargetScreen, SortedScreenList) may be called to create all valid paths from the current screen to the target screen to be stored in the PathList. The function GeneratePath () will be described in greater detail with reference to FIG. 9.
[0062] FIG. 9 illustrates a flow chart 900 that depicts an exemplary method for determining all possible screen transition paths for navigating from a current screen to a target screen in a recursive manner using the function GeneratePath(). To that end, the present method employs the SortedScreenList that includes known screen transition entries including "from screen," "to screen," and "user action" needed for the transition from the “from screen” to the “to screen.” In one embodiment, the entries in the SortedScreenList are sorted based on the “from screen” data.
[0063] The method begins at step 902, where a check may be performed to determine whether SortedScreenList includes any more screen transition entries whose “from screen” is the same as FromScreen. If SortedScreenList includes such an entry, at step 904, it may further be determined whether the path built so far already includes the “to screen” of that entry. If the path already includes the “to screen,” the entry is discarded to avoid a loop and control returns to step 902. However, if the path does not include the “to screen,” at step 908, the transition (from screen, user action, to screen) is appended to the given Path. As used herein, the Path corresponds to an intermediate path obtained from previous calls to GeneratePath() for transitioning to TargetScreen.
[0064] Subsequently, at step 910, it is determined if “to screen” defined in the screen transition entry is the same as a desired TargetScreen. If the “to screen” in a screen transition entry is the same as TargetScreen, the path is added to the PathList at step 906. However, if the “to screen” is different from the TargetScreen, at step 912, the function GeneratePath(Path, to screen, TargetScreen, SortedScreenList) is executed to add all valid paths between “to screen” and TargetScreen to the PathList in a recursive manner. Furthermore, a function GetPath() may be called to select the most desirable path, that is TargetPath, from the PathList based on a predefined configuration and/or user preferences.
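One possible recursive realization of GeneratePath() is sketched below in Python, assuming the screen transition entries are stored as dictionaries with "from", "to", and "action" fields; the data layout and function name are illustrative only. The loop-avoidance check mirrors the rejection of transitions whose destination already appears on the path, as illustrated in the examples that follow.

    def generate_paths(path, from_screen, target_screen, transitions, path_list):
        """path: list of (from, action, to) tuples accumulated in earlier calls."""
        for entry in transitions:
            if entry["from"] != from_screen:
                continue
            # reject the transition if its destination is already on the path (loop avoidance)
            visited = {from_screen} | {step[0] for step in path}
            if entry["to"] in visited:
                continue
            new_path = path + [(entry["from"], entry["action"], entry["to"])]
            if entry["to"] == target_screen:
                path_list.append(new_path)          # a complete path to the target screen
            else:
                generate_paths(new_path, entry["to"], target_screen, transitions, path_list)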
[0065] An exemplary implementation of the method of FIG. 9 is disclosed herein with reference to FIG. 5. If the possible connections between the screens are as depicted in FIG. 5, with the current screen being S1 and the target screen being S5, three transitions are possible from S1, namely S1->S2, S1->S3, and S1->S4.
[0066] For generating all paths from S1 to S5, for example, the function GeneratePath() may be employed. Particularly, in an example, the function GeneratePath() is called with Path = null, FromScreen = S1, and TargetScreen = S5. For readability, the parameter SortedScreenList is omitted. In one embodiment, the function starts with an empty PathList. Further, SortedScreenList will have three entries with the from screen as S1. Thus, three intermediate paths, S1->S2, S1->S3, and S1->S4, will be generated after executing the GeneratePath() function for three iterations as shown below:
[0067] GeneratePath(S1->S2, S2, S5)
[0068] GeneratePath(S1->S3, S3, S5)
[0069] GeneratePath(S1->S4, S4, S5)
[0070] In the first iteration, execution of GeneratePath(S1->S2, S2, S5) will end without adding any path to the PathList. However, in the second iteration, execution of GeneratePath(S1->S3, S3, S5) will add S1->S3->S5 to the PathList. Further, execution of GeneratePath(S1->S4, S4, S5) will add S1->S4->S5 to the PathList. Thus, the PathList will contain two paths: S1->S3->S5 and S1->S4->S5.
[0071] The method depicted in FIG. 8 may use the generated PathList to identify the optimal path for navigating from the current screen to the target screen.
[0072] However, in another example, if S3 and S4 include a bi-directional connection in addition to the connections shown in FIG. 5, the function GeneratePath() may execute as described herein below. As noted previously, for generating all paths from S1 to S5, GeneratePath() may be called with Path = null, FromScreen = S1, and TargetScreen = S5. In one embodiment, the function starts with an empty PathList. Further, SortedScreenList will have three entries with the from screen as S1. Thus, three intermediate paths, S1->S2, S1->S3, and S1->S4, will be generated after executing the GeneratePath() function for three iterations as shown below:
[0073] GeneratePath(S1->S2, S2, S5)
[0074] GeneratePath(S1->S3, S3, S5)
[0075] GeneratePath(S1->S4, S4, S5)
[0076] In the first iteration, execution of GeneratePath(S1->S2, S2, S5) will end without adding any path to the PathList, as in the earlier example. In the next iteration, execution of GeneratePath(S1->S3, S3, S5) will add S1->S3->S5 to the PathList and will call GeneratePath(S1->S3->S4, S4, S5). Further, execution of GeneratePath(S1->S4, S4, S5) will add S1->S4->S5 to the PathList, while calling GeneratePath(S1->S4->S3, S3, S5). Similarly, execution of GeneratePath(S1->S3->S4, S4, S5) will add S1->S3->S4->S5 to the PathList, while rejecting S4->S3 to avoid a loop because the identified path S1->S3->S4 already includes the destination screen S3. Moreover, execution of GeneratePath(S1->S4->S3, S3, S5) will add S1->S4->S3->S5 to the PathList, but will reject S3->S4 because the identified path S1->S4->S3 already includes S4. Subsequently, the PathList will include four paths: S1->S3->S5, S1->S4->S5, S1->S3->S4->S5, and S1->S4->S3->S5.
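The two worked examples above may be reproduced with the generate_paths() sketch given earlier, assuming the illustrative transition entries below for FIG. 5; the assignment of user actions UA1-UA5 to particular transitions, and the labels UA7 and UA8 for the added bi-directional connection, are assumptions made for the example only.

    transitions = [
        {"from": "S1", "to": "S2", "action": "UA1"},
        {"from": "S1", "to": "S3", "action": "UA2"},
        {"from": "S1", "to": "S4", "action": "UA3"},
        {"from": "S3", "to": "S5", "action": "UA4"},
        {"from": "S4", "to": "S5", "action": "UA5"},
    ]
    path_list = []
    generate_paths([], "S1", "S5", transitions, path_list)
    # yields two paths: S1->S3->S5 and S1->S4->S5

    # Adding a bi-directional connection between S3 and S4 yields four paths,
    # with the S3<->S4 loop transitions rejected as described above.
    transitions += [
        {"from": "S3", "to": "S4", "action": "UA7"},
        {"from": "S4", "to": "S3", "action": "UA8"},
    ]
    path_list = []
    generate_paths([], "S1", "S5", transitions, path_list)
    # yields S1->S3->S5, S1->S4->S5, S1->S3->S4->S5, and S1->S4->S3->S5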
[0077] With returning reference to FIG. 8, the method may use the generated PathList to identify the optimal path at step 812 for navigating from the current screen to the target screen via TargetPath, while executing test cases with improved exception handling. Further, at step 814, it may be determined if there are any more transitions in TargetPath. If there are no further transitions, at step 816, it is determined if the current screen is the target screen. If so, the method exits at step 818. However, if the current screen is not the target screen, the control passes to step 806, where appropriate EHG scripts are executed to handle the unknown or unexpected screen.
[0078] Alternatively, at step 814, if there are additional transitions in TargetPath, the next transition may be achieved by performing the designated user action and/or executing a user-defined script. Particularly, at step 820, the value of CurrentScreen is replaced with the "to screen" field of the screen transition entry present in the SortedScreenList. Furthermore, CPToVerifyScreen is initialized with all checkpoints needed to verify the new CurrentScreen. Moreover, at step 822, it is determined if there are any checkpoint items remaining to be verified in CPToVerifyScreen. If so, at step 824, the next checkpoint in CPToVerifyScreen is identified and is subsequently verified. If there are no more checkpoints, the control passes to step 814, where further transitions in TargetPath may be evaluated. At step 826, it is determined if any checkpoint fails verification. If a checkpoint fails verification, the control passes to step 802, where the screen identification and exception handling scripts stored in the SSnT 204 and the EHG 212 are executed to catch any unexpected scenarios during testing.
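The overall navigation loop of FIG. 8 may be condensed into the following sketch, which assumes the illustrative helpers sketched earlier (generate_paths() and handle_unexpected()) together with hypothetical identify(), perform_action(), and select_path() callables standing in for screen identification, user action replay, and path selection (for example, selecting the shortest path); it is a sketch under those assumptions rather than a definitive implementation of the claimed method.

    def move_to_target(target_screen, transitions, identify, perform_action,
                       handle_unexpected, select_path):
        """identify() -> identified screen name or None; perform_action(action) replays a
        recorded user action; select_path(path_list) returns the desired complete path."""
        current = identify()
        if current is None:
            handle_unexpected(current)        # centralized EHG handling for an unknown screen
            return False
        if current == target_screen:
            return True                       # the target screen is already reached
        path_list = []
        generate_paths([], current, target_screen, transitions, path_list)
        if not path_list:
            return False                      # no complete transition path could be determined
        for _, action, to_screen in select_path(path_list):
            perform_action(action)            # fire the recorded key press or script
            if identify() != to_screen:       # checkpoint verification of the new screen failed
                handle_unexpected(to_screen)  # fall back to centralized EHG handling
                return False
        return True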
[0079] Embodiments of the present systems and methods, thus, provide an effective mechanism to resolve exceptions at a central or common point while automating test execution for interactive testing. Specifically, use of the SSnT 204 and the EHG 212 provides the knowledge of various screens, partial screen transitions, and exceptional cases graphically in a machine understandable form, thus proving to be an effective tool for verifying system transitions and rules at a macro level by both machines and humans. Particularly, moving the unexpected event handling to a central or common point in the TAS 110 provides for a highly reusable test automation suite that is easy to maintain even when the product design or platform changes. Use of the claimed TAS 110 aids in faster and more cost-effective development of test automation scripts and test execution by obviating unnecessary screen transitions and/or rebooting, while allowing the test automation engineers to focus on individual test cases.
[0080] It may be noted that the foregoing examples, demonstrations, and process steps that may be performed by certain components of the present systems, for example, by the TMS 108 and/or the TAS 110, may be implemented using hardware, firmware, and/or suitable code on a processor-based system, such as a general-purpose or a special-purpose computer. It may also be noted that different implementations of the present specification may perform some or all of the steps described herein in different orders or substantially concurrently.
[0081] Additionally, various functions and/or method steps described herein may be implemented in a variety of programming languages, including but not limited to Ruby, Hypertext Pre-processor (PHP), Perl, Delphi, Python, C, C++, or Java. Such code may be stored or adapted for storage on one or more tangible, machine-readable media, such as on data repository chips, local or remote hard disks, optical disks (that is, CDs or DVDs), solid-state drives, or other media, which may be accessed by the processor-based system to execute the stored code.
[0082] Although specific features of various embodiments of the present systems and methods may be shown in and/or described with respect to some drawings and not in others, this is for convenience only. It is to be understood that the described features, structures, and/or characteristics, and any subset thereof, may be combined and/or used interchangeably in any suitable manner in the various embodiments, for example, to construct additional assemblies and techniques for use in various test automation systems.
[0083] While only certain features of the present systems and methods have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
CLAIMS
1. A method for improved exception handling in a test automation system configured to evaluate a device under test, comprising:
generating a screen transitions database comprising definition of one or more unique screens in the device under test, one or more known checkpoints that identify each of the unique screens, and one or more partial transition paths from each of the unique screens to all immediately neighboring screens;
generating a global exception handling guidance database comprising one or more instructions to be executed when the test automation system encounters an unknown screen or one of the unique screens unexpectedly during execution of a test script on the device under test;
performing automated testing on the device under test, wherein the automated testing comprises:
identifying a current screen by iteratively comparing checkpoints in the current screen with the known checkpoints in one or more of the unique screens;
recursively determining one or more complete transition paths between the identified screen and a target screen defined in the test scripts using the partial transition paths stored in the screen transitions database;
selecting a desired transition path from the complete transition paths based on one or more of a user preference and a predefined configuration;
navigating to the target screen via the selected transition path; and
executing one or more test scripts for evaluating a desired functionality related to one or more of the current screen, one or more intermediate screens, and the target screen such that the test automation system centrally executes the specific instructions in the global exception handling guidance database upon encountering an unknown screen or one of the unique screens unexpectedly during execution of the test scripts.
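To make the sequence of steps recited in claim 1 easier to follow, a minimal Python sketch of one possible execution-engine loop is given below. The helper names (identify_current_screen, complete_paths, navigate_to, run_test_script, and the ehg_handler object) are illustrative assumptions; only the ordering of the steps mirrors the claim, and the sketch is not the actual execution engine 108, 116.

```python
# Illustrative walk-through of the claimed automated-testing steps.
# All helper names are hypothetical; only the step ordering mirrors claim 1.

def automated_test(engine, transitions_db, ehg_handler, test_script, target_screen):
    # 1. Identify the current screen by comparing its checkpoints with the
    #    known checkpoints of the unique screens (see claim 5 for one approach).
    current = engine.identify_current_screen(transitions_db)

    # 2. Recursively build complete transition paths from the stored partial
    #    (one-hop) transition paths (see claim 7).
    paths = transitions_db.complete_paths(current, target_screen)

    # 3. Select a desired path, for example the shortest one (claim 6)
    #    or one matching a user preference or predefined configuration.
    chosen = min(paths, key=len)

    # 4. Navigate along the chosen path, screen by screen.
    for screen in chosen:
        observed = engine.navigate_to(screen)
        if observed != screen:
            # 5. Any unknown or unexpected screen is resolved centrally by the
            #    exception handling guidance, never inside the test script itself.
            if not ehg_handler.handle(observed, expected_screen=screen):
                return False

    # 6. Execute the test script that validates the desired functionality.
    return engine.run_test_script(test_script)
```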
2. The method as claimed in claim 1, wherein the screen transitions database further comprises one or more test scripts representing at least one sequence of one or more events, hot keys, user actions, or combinations thereof, for use in automated testing of desired functionality related to each of the unique screens.
3. The method as claimed in claim 1, wherein generating the global exception handling guidance database comprises defining and storing distinct sets of error handling instructions corresponding to each of the unique screens that appear unexpectedly and any unknown screen.
4. The method as claimed in claim 1, wherein the error handling instructions corresponding to each of the unique screens that appear unexpectedly and any unknown screen comprise instructions to continue execution of the test scripts, to terminate execution of the test scripts, to navigate to a predefined screen, or combinations thereof.
5. The method as claimed in claim 1, wherein identifying the current screen comprises:
incrementally adding one or more checkpoints that are determined to be present in the current screen in a particular iteration to a first list and one or more checkpoints that are determined to be absent from the current screen in the particular iteration to a second list, and
iteratively comparing the current screen with one or more of the unique screens using the first list and the second list.
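Claim 5 describes identification of the current screen via two incrementally built checkpoint lists. A minimal Python sketch of this idea follows, under the assumption that each unique screen is stored as a set of known checkpoints and that a hypothetical probe(checkpoint) call reports whether a checkpoint is visible on the current DUT screen; both assumptions are illustrative only.

```python
# Hypothetical sketch of iterative screen identification (claim 5).
# 'probe' and the screen/checkpoint layout are assumptions for illustration only.

def identify_screen(unique_screens, probe):
    """unique_screens: dict mapping screen name -> set of known checkpoints.
    probe(checkpoint) -> bool: True if the checkpoint is visible on the DUT."""
    present, absent = set(), set()          # the first and second lists of the claim
    candidates = dict(unique_screens)

    while len(candidates) > 1:
        # Pick an as-yet-untested checkpoint that may still discriminate candidates.
        untested = {cp for cps in candidates.values() for cp in cps} - present - absent
        if not untested:
            break
        checkpoint = untested.pop()
        (present if probe(checkpoint) else absent).add(checkpoint)

        # Keep only the unique screens consistent with everything observed so far.
        candidates = {
            name: cps for name, cps in candidates.items()
            if present <= cps and not (absent & cps)
        }

    return next(iter(candidates), None)      # None if the screen is unknown
```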
6. The method as claimed in claim 1, wherein selecting the desired transition path comprises selecting the shortest complete transition path between the current screen and the target screen.
7. The method as claimed in claim 1, wherein recursively determining one or more complete transition paths between the identified screen and the target screen comprises identifying the intermediate screens having one or more derived checkpoints between the current screen and the target screen based on the partial transition paths, wherein the derived checkpoints comprise one or more checkpoints present in the current screen and the target screen.
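Claims 6 and 7 can be read together as a path search over the stored one-hop (partial) transitions. The following minimal Python sketch, in which the partial transition paths are assumed to be stored as an adjacency mapping of screen names, recursively enumerates complete paths and then selects the shortest one; the screen names and data layout are illustrative assumptions only.

```python
# Hypothetical sketch of recursive complete-path determination (claim 7)
# with shortest-path selection (claim 6). The adjacency layout is an assumption.

def complete_paths(partial, current, target, visited=None):
    """partial: dict mapping each unique screen to its immediately neighboring
    screens (i.e., the stored partial transition paths)."""
    visited = (visited or set()) | {current}
    if current == target:
        return [[current]]
    paths = []
    for neighbor in partial.get(current, []):
        if neighbor not in visited:                      # avoid revisiting screens
            for tail in complete_paths(partial, neighbor, target, visited):
                paths.append([current] + tail)           # prepend the current hop
    return paths

def shortest_path(partial, current, target):
    paths = complete_paths(partial, current, target)
    return min(paths, key=len) if paths else None

# Example: partial transitions of a few STB screens (illustrative values only).
partial = {
    "home": ["guide", "settings"],
    "guide": ["home", "program_info"],
    "settings": ["home", "network"],
    "program_info": ["guide"],
}
print(shortest_path(partial, "settings", "program_info"))
# -> ['settings', 'home', 'guide', 'program_info']
```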
8. The method as claimed in claim 1, further comprising reporting one or more of the screen transitions database and the global exception handling guidance database in one or more of a machine understandable and visual format for use in one or more of a manual verification and a machine verification of system transitions and rules.
9. The method as claimed in claim 1, further comprising updating one or more of the screen transitions database and the global exception handling database based on an output of the automated testing on the device under test.
10. The method as claimed in claim 1, further comprising updating one or more of the partial transition paths corresponding to only one or more of the unique screens that are modified following an update of the device under test.
11. A test automation system (100, 200) configured to evaluate a device under test (102), comprising:
a screen transitions database (204) comprising definition of one or more unique screens in the device under test (102), one or more known checkpoints that identify each of the unique screens, and one or more partial transition paths from each of the unique screens to all immediately neighboring screens;
a global exception handling guidance database (212) comprising one or more instructions to be executed when the test automation system (100, 200) encounters an unknown screen or one of the unique screens unexpectedly during execution of a test script on the device under test (102);
an execution engine (108, 116) operatively coupled to one or more of the device under test (102), the screen transitions database (204), and the global exception handling guidance database (212) and configured to perform automated testing on the device under test (102), wherein the automated testing comprises:
identifying a current screen by iteratively comparing checkpoints in the current screen with the known checkpoints in one or more of the unique screens;
recursively determining one or more complete transition paths between the identified screen and a target screen defined in the test scripts using the partial transition paths stored in the screen transitions database;
selecting a desired transition path from the complete transition paths based on one or more of a user preference and a predefined configuration;
navigating to the target screen via the selected transition path; and
executing one or more test scripts for evaluating a desired functionality related to one or more of the current screen, one or more intermediate screens, and the target screen such that the test automation system centrally executes the specific instructions in the global exception handling guidance database (212) upon encountering an unknown screen or one of the unique screens unexpectedly during execution of the test scripts.
12. The system (100, 200) as claimed in claim 11, further comprising:
a reporting module (132) configured to output one or more of the screen transitions database and the global exception handling guidance database (212) in one or more of a machine understandable and visual format for use in one or more of a manual verification and a machine verification of system transitions and rules; and
a graphical user interface (122) configured to provide a user with access to configure different test suites for testing desired functionality related to one or more of the unique screens, selecting the desired complete transition path between the current screen and the target screen, or a combination thereof.
| # | Name | Date |
|---|---|---|
| 1 | Power of Attorney [10-09-2015(online)].pdf | 2015-09-10 |
| 2 | Description(Provisional) [10-09-2015(online)].pdf | 2015-09-10 |
| 3 | Description(Complete) [15-12-2015(online)].pdf | 2015-12-15 |
| 4 | Assignment [15-12-2015(online)].pdf | 2015-12-15 |
| 5 | Form-2(Online).pdf | 2016-09-30 |
| 6 | Form26_General Power Of Attorney_04-10-2019.pdf | 2019-10-04 |
| 7 | Form1_After Filing_04-10-2019.pdf | 2019-10-04 |
| 8 | Declaration_After Filing_04-10-2019.pdf | 2019-10-04 |
| 9 | Correspondence by Agent_Form1_Form26_Declaration_04-10-2019.pdf | 2019-10-04 |
| 10 | SEARCHSTRATEGY_4821_CHE_2015_20-12-2019.pdf | |
| 11 | 4821-CHE-2015-FER.pdf | 2019-12-23 |
| 12 | 4821-che-2015-CLAIMS [22-06-2020(online)].pdf | 2020-06-22 |
| 13 | 4821-che-2015-FER_SER_REPLY [22-06-2020(online)].pdf | 2020-06-22 |
| 14 | 4821-CHE-2015-FORM 3 [22-06-2020(online)].pdf | 2020-06-22 |
| 15 | 4821-CHE-2015-FORM-26 [22-06-2020(online)].pdf | 2020-06-22 |
| 16 | 4821-CHE-2015-PETITION UNDER RULE 137 [22-06-2020(online)].pdf | 2020-06-22 |
| 17 | 4821-CHE-2015-IntimationOfGrant10-06-2022.pdf | 2022-06-10 |
| 18 | 4821-CHE-2015-PatentCertificate10-06-2022.pdf | 2022-06-10 |