Abstract: The invention discloses an Automation Re-usable Core (ARC) framework 102 that delivers a unified, low-code/no-code solution for automating sanity, regression and end-to-end tests across web, mobile and API interfaces. ARC adopts a modular, layered architecture—comprising a Feature layer 202, Step-Definition layer 204, Page-Object layer 206, API layer 208, Test-Data layer 210, Utility/Keyword layer 212, Test-Runner layer 214, Execution layer 216, a CI/CD-integration layer served by a CI server 106, and a Reporting layer 220—that affords plug-and-play reuse, rapid maintenance and seamless extensibility. A central driver factory within layers 206 and 208 abstracts browser and mobile drivers; the configurable data layer 210 accepts static files, relational or NoSQL databases and message queues; and built-in retry and environment-selection logic embedded in the Test-Runner layer 214 stabilise executions. The framework interfaces generically with an external version-control repository 104, the CI server 106, a test-management system 112 and an artefact repository 114, thereby enabling fully automated pipelines from code commit to defect logging. Its low-code interface lets non-programmers author Gherkin scenarios that bind to the reusable keywords of layer 212, democratising test creation while safeguarding intellectual property through controlled repository access. Accordingly, the ARC framework 102 provides a scalable, platform-agnostic and secure test-automation mechanism for modern software-delivery environments.
METHOD AND SYSTEM FOR A UNIFIED AUTOMATION-TESTING FRAMEWORK
TECHNICAL FIELD OF THE INVENTION
[0001] The present invention relates generally to the field of software test automation. More specifically, the invention relates to a reusable, scalable, and extensible automation framework that can be used to automate sanity, regression, and end-to-end testing of software applications across Web, Mobile, and API interfaces.
BACKGROUND
[0002] In modern software development environments, where rapid deployment, scalability, and efficiency are core requirements, test automation plays a pivotal role in ensuring the quality and reliability of software products. The adoption of Agile methodologies, DevOps pipelines, microservices architectures, and multi-platform applications has increased both the demand for and the complexity of software test automation.
[0003] However, existing software test automation frameworks often lack adaptability and require extensive manual intervention, leading to inefficiencies in the testing lifecycle. Traditionally, enterprises have implemented separate frameworks for different domains—one for web UI testing, another for mobile testing, and yet another for API testing. These independent setups lead to a lack of reusability, redundancy in development and maintenance efforts, and inconsistency across test platforms (web, mobile, API).
[0004] Moreover, these test automation frameworks typically demand strong automation-scripting skills and offer limited support for low-code automation. This creates a bottleneck in quality assurance (QA) teams, where non-programmers struggle to participate in test creation and management. As a result, test automation often becomes siloed and time-consuming.
[0005] Another significant challenge lies in the integration of test frameworks with enterprise toolchains—including CI/CD tools like Jenkins, version control systems like Git/Bitbucket, test case management systems like Jira, and cloud device platforms like BrowserStack. Integrating these components manually is cumbersome and error-prone, and their support across traditional frameworks is often inconsistent or partial.
[0006] Furthermore, test data management remains a major issue in enterprise test automation. Test cases that rely on hardcoded data often break down with environment changes, leading to unstable tests and reduced trust in automation results. Security and IP protection also pose critical concerns. With distributed teams, external vendors, and outsourced QA services, there is a need for tightly controlled access to the automation framework, along with legal and procedural safeguards to ensure the confidentiality of proprietary test infrastructure.
[0007] For example, the Chinese patent application CN116521535A discloses a UI-automation test platform that integrates Selenium-Grid, Appium and Jenkins to provide zero-code test-case authoring and distributed execution, but confines its scope to UI testing and offers only limited support for API workflows and database validation. Likewise, US12174731 B2 teaches an automated framework dedicated to testing virtual-assistant skills within a continuous-integration pipeline; its mechanisms rely on skill-specific test suites and NLP-centric datasets, making the solution unsuitable for generic web or mobile regression suites. Indian patent application 202121037501 proposes a behaviour-driven test-automation framework that blends BDD and TDD practices, yet it requires separate driver stacks for web, mobile and API layers, thereby re-introducing redundancy in driver maintenance and configuration.
[0008] Consequently, present-day frameworks either (i) focus on a single interface type (web or mobile or API), (ii) demand high programming skill and substantial manual scripting, or (iii) provide partial DevOps integration that still leaves testers to stitch together CI servers, test-management systems and cloud device farms manually. None of the above-mentioned prior arts offers an intelligent low-code/no-code solution that unifies driver management, dynamic test-data generation, accessibility scanning, CI/CD orchestration and environment-agnostic execution within a single reusable core.
[0009] In summary, while test automation is essential to modern software delivery, the current landscape is fragmented and difficult to scale. Enterprises require a unified, reusable, secure, and intelligent automation framework that can adapt to various platforms while minimizing manual effort. Therefore, in view of the foregoing, the proposed invention provides a unified Low Code No Code Automation Framework for Cross-Platform End-to-End Testing to address the aforementioned challenges by delivering a solution that has a layered architecture capable of handling test generation, execution, validation, and reporting for Web, Mobile, and API-based systems through a single framework.
OBJECTS OF THE INVENTION
[0010] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are as listed herein below.
[0011] The principal object of the present invention is to provide a platform-agnostic automation-testing framework that is compatible with Web, Mobile and API interfaces.
[0012] Another object of the present invention is to provide unified driver management, by maintaining a common driver factory abstracting browser and mobile driver initialization.
[0013] Yet another object of the present invention is to ensure seamless switching between test types by providing a centralized configuration for browser types, mobile devices, environments, and API endpoints.
[0014] Yet another object of the present invention is to facilitate cross-platform reporting through a unified reporting structure that includes web, mobile, and API results in a single test execution report.
[0015] Yet another object of the present invention is to enable execution of the same automation scripts across multiple environments.
SUMMARY OF THE INVENTION
[0016] The following presents a simplified summary of some embodiments of the invention in order to provide a basic understanding of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some embodiments of the invention in a simplified form as a prelude to the more detailed description that is presented later.
[0017] In accordance with one embodiment, the present invention provides an automation-testing framework that integrates modern DevOps practices and enhances software reliability by automating test case generation, execution, and result verification through a Low Code/No Code approach.
[0018] According to the preferred embodiment, a unified, reusable, and intelligent software test automation framework, referred to as ARC (Automation Reusable Core), is developed for end-to-end testing of web, mobile, and API interfaces. It provides a modular design using Low Code/No Code principles to allow non-technical users and testers to create and execute test cases with minimal effort.
[0019] The framework is integrated with industry-standard tools such as Jenkins for CI/CD, Jira for testcase and defect tracking, BrowserStack for mobile execution, and Bitbucket for version control. Test data is handled flexibly via Excel, JSON, XML, and via databases such as MySQL, Oracle, and Aerospike. The invention also includes an intelligent retry mechanism, auto-test data generation, accessibility testing modules, and configurable environment support.
[0020] The ARC framework is composed of multiple layers, each responsible for a distinct function within the test lifecycle. These layers are designed to work together in a plug-and-play manner, allowing flexibility and extensibility across different projects, application types, and execution environments.
[0021] The framework is designed to be highly secure, and it is maintained in a controlled repository with access governed by strict legal agreements and confidentiality provisions.
BRIEF DESCRIPTION OF DRAWINGS
[0022] The accompanying drawings, which are incorporated in, and constitute a part of, this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0023] FIG. 1 shows an exemplary system 100 for implementing a method for providing an Automation Re-usable Core (ARC) framework 102.
[0024] FIG. 2 depicts various components/layers 200 of the ARC Framework 102.
[0025] FIG. 3 depicts a Flow-diagram 300 illustrating the unified test-execution process implemented by the Automation Re-usable Core (ARC) framework 102.
[0026] List of reference numerals
100 System
102 Automation Re-usable Core (ARC) framework
104 Version-control repository
106 CI/CD Server
108 Script execution
110 Repository Module
112 Test Management System
114 Artifact Repository
200 ARC Framework Components
202 Feature Layer
204 Step Definitions
206 Page Object Layer
208 API Layer
210 Test Data Layer
212 Utility/Keyword Layer
214 Test Runner
216 Execution Layer
218 CI/CD Integration
220 Reporting
DETAILED DESCRIPTION OF THE INVENTION
[0027] Various embodiments described herein are intended only for illustrative purposes and subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the scope of the present invention. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
[0028] The use of terms “including,” “comprising,” or “having” and variations thereof herein are meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the terms, “an” and “a” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
[0029] Referring now to FIG. 1. In accordance with the preferred embodiment of the present invention, a system 100 comprises the Automation Reusable Core (ARC) Framework 102, referred to as “ARC Framework” or “the Framework” hereinafter, and various other utilities or industry-standard tools 104–114 integrated with the said framework 102.
[0030] The automation-testing framework described herein may be realised by, or executed on, one or more processors communicatively coupled to a non-transitory memory. The memory stores computer-readable instructions which, when executed by the processors, cause the processors to instantiate the functional layers of the Automation Re-usable Core (ARC) framework 102—namely the Feature layer 202, Step-Definition layer 204, Page-Object layer 206, API layer 208, Test-Data layer 210, Utility layer 212, Test-Runner layer 214, Execution layer 216, CI/CD-Integration layer 218 and Reporting layer 220.
[0031] In at least one embodiment, the processors include a central-processing unit (CPU) and, optionally, a graphics-processing unit (GPU) or hardware accelerator. The memory may comprise volatile media such as static or dynamic RAM and/or non-volatile media such as flash, EEPROM or a solid-state drive. Together the processors and memory form an electronic device operatively coupled to peripheral interfaces for network, storage and user input.
[0032] The foregoing hardware components can be embodied in a single computing device (e.g. a developer workstation or dedicated test appliance) or distributed across multiple virtual machines in a cloud environment.
[0033] The ARC Framework 102 cooperates with a suite of supporting utilities/industry standard tools 104 to 114 that together establish an end-to-end automation pipeline. A version-control repository 104 (e.g., Bitbucket) hosts all framework source, reusable libraries and configuration artefacts. Commits to this repository are polled by a CI/CD server 106 (e.g., Jenkins), which compiles the libraries, packages the tests and triggers execution on a cross-device / cross-browser platform 108 (e.g., BrowserStack) so that web, mobile and API scenarios run across heterogeneous environments. Throughout execution a reporting module 110 produces Cucumber and customised Extent-HTML reports, captures screenshots and consolidates API request/response logs; the compiled report set is automatically e-mailed to stakeholders. Result status and defect information are synchronised with a test-management system 112 (e.g., Jira), thereby providing real-time visibility. All generated binaries, reports and data files are finally archived in an artefact repository 114, ensuring traceability and reuse across SIT, UAT and production environments.
[0034] Referring now to Fig. 2. According to the preferred embodiment, the ARC framework 102 is composed of multiple layers 200, each responsible for a distinct function within the test lifecycle. These layers are designed to work together in a plug-and-play manner, allowing flexibility and extensibility across different projects, application types, and execution environments. Each of the layers of the ARC framework 102 is described below in more detail.
[0035] At the top of the stack is the Feature Layer 202, which supports test scenario definition using Gherkin, a plain text human-readable language. This supports Behavior-Driven Development (BDD), allowing business stakeholders, QA engineers, and developers to collaborate effectively on defining acceptance criteria. Each testcase is written in a Given-When-Then format and saved as a .feature file. This ensures that test cases remain easily readable, understandable, and configurable even by non-programmers.
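For illustration, a hypothetical .feature file (the scenario, page and field names are invented for this sketch and are not part of the claimed framework) could read:

```gherkin
Feature: User login across platforms

  Scenario: Valid user signs in on the web portal
    Given the user opens the "login" page
    When the user enters credentials supplied by the test-data layer
    Then the dashboard page is displayed
```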
[0036] Step Definition layer 204 acts as the glue between the human-readable feature files and the actual automation logic. Each line in a .feature file is mapped to a corresponding Java method using Cucumber annotations. This mapping is highly modular, enabling the reuse of step definitions across different scenarios and feature files.
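Cucumber performs this binding internally through annotations and regular expressions; the underlying idea—plain-language statements resolved against registered patterns and dispatched to executable bodies—can be sketched with the Java standard library alone. The class and method names below are hypothetical and do not represent the Cucumber API or the actual ARC implementation.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal sketch of binding a plain-language step to executable code. */
class StepRegistry {
    private final Map<Pattern, Consumer<List<String>>> bindings = new LinkedHashMap<>();

    /** Register a step definition, analogous to a Cucumber @Given/@When/@Then method. */
    void bind(String regex, Consumer<List<String>> body) {
        bindings.put(Pattern.compile(regex), body);
    }

    /** Resolve a Gherkin statement to its definition and run it with captured arguments. */
    boolean run(String statement) {
        for (Map.Entry<Pattern, Consumer<List<String>>> e : bindings.entrySet()) {
            Matcher m = e.getKey().matcher(statement);
            if (m.matches()) {
                List<String> args = new ArrayList<>();
                for (int i = 1; i <= m.groupCount(); i++) args.add(m.group(i));
                e.getValue().accept(args);
                return true;
            }
        }
        return false; // undefined step
    }
}
```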
[0037] The Page Object layer 206 supports UI automation through a central repository that stores all element locators in external CSV files, which are loaded dynamically at runtime. This allows changes to the UI to be managed without modifying test logic, improving the maintainability of the framework. Additionally, this layer provides reusable methods for interacting with page elements, such as clicking buttons, entering text, scrolling, and verifying UI elements.
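The CSV-backed locator lookup can be sketched as follows. The column layout `page,element,type,value` is a hypothetical choice for this sketch; a production version would read the CSV from disk and feed the resolved locator into a WebDriver element lookup rather than returning a string.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: locators kept outside the test code in CSV rows of the form page,element,type,value. */
class LocatorRepository {
    // key "page.element" -> "type=value", e.g. "login.username" -> "id=user-field"
    private final Map<String, String> locators = new HashMap<>();

    /** Load locator rows (from a String here; in practice from a CSV file read at runtime). */
    void load(String csv) {
        for (String line : csv.strip().split("\\R")) {
            String[] cols = line.split(",", 4); // page, element, locator type, locator value
            locators.put(cols[0] + "." + cols[1], cols[2] + "=" + cols[3]);
        }
    }

    /** Look up a locator; UI changes only require editing the CSV, not the test logic. */
    String locatorFor(String page, String element) {
        return locators.get(page + "." + element);
    }
}
```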
[0038] API Layer 208 enables comprehensive API testing using Karate DSL, RestAssured, and API ProbeX. The framework supports testing of REST, SOAP and GraphQL APIs. API test execution is integrated with the test runner used for UI and mobile tests, allowing end-to-end workflows that span multiple interfaces, to be automated seamlessly.
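The common driver factory shared by layers 206 and 208 can be illustrated with a minimal, dependency-free Java sketch. The class and method names are hypothetical; a real implementation would return Selenium WebDriver, Appium driver or REST-client instances rather than driver names.

```java
/** Sketch of a common driver factory that abstracts driver selection behind one entry point. */
class DriverFactory {
    /**
     * Returns the driver type chosen for the given target. In the real framework this would
     * instantiate and return the driver object; names are used here to keep the sketch runnable.
     */
    static String forTarget(String target) {
        switch (target.toLowerCase()) {
            case "chrome":  return "ChromeDriver";
            case "firefox": return "FirefoxDriver";
            case "android": return "AndroidDriver";
            case "ios":     return "IOSDriver";
            case "api":     return "ApiClient";
            default: throw new IllegalArgumentException("Unsupported target: " + target);
        }
    }
}
```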
[0039] Test Data Layer 210 is a flexible interface that manages test data in a centralized manner. The data layer supports static data (Excel, JSON, XML files), dynamic data (MySQL, Oracle, Aerospike databases), and messaging platforms such as Kafka.
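The routing idea behind such a centralized data layer can be sketched as a single entry point that delegates to whichever backend handles the requested source type. This is a stdlib-only illustration with a hypothetical `fetch` API; real backends would be Excel/JSON readers, JDBC connections or Kafka consumers.

```java
import java.util.Map;
import java.util.function.Function;

/** Sketch: one entry point that routes a data request to the configured source type. */
class TestDataLayer {
    private final Map<String, Function<String, String>> sources;

    TestDataLayer(Map<String, Function<String, String>> sources) {
        this.sources = sources;
    }

    /** e.g. fetch("json", "user.name") delegates to whichever backend handles "json". */
    String fetch(String sourceType, String key) {
        Function<String, String> source = sources.get(sourceType);
        if (source == null) throw new IllegalArgumentException("No such source: " + sourceType);
        return source.apply(key);
    }
}
```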
[0040] Utility and Keyword layer 212 consists of reusable keyword functions and utility classes. These utilities are invoked by step definitions and test logic to provide a standardized mechanism for repeated operations.
[0041] Test Runner Layer 214 executes the tests using the TestNG framework (integrated with Cucumber) which allows for parallel execution and test grouping. This enables flexibility in selecting test suites depending on the need or pipeline stage. Retry logic is configurable and allows automatic rerun of failed test cases based on failure classification.
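The retry behaviour described above can be sketched in plain Java. The class names and failure-class strings below are illustrative assumptions, not the actual ARC or TestNG implementation; the point is that only failures matching a configured classification are re-run, up to a limit.

```java
import java.util.Set;

/** Sketch of retry logic: rerun a failed step only when its failure class is retryable. */
class RetryRunner {
    /** A step returns null on success, otherwise a failure classification string. */
    interface Step { String run(); }

    private final Set<String> retryableClasses; // e.g. "network-timeout"
    private final int maxAttempts;              // initial attempt + retries

    RetryRunner(Set<String> retryableClasses, int maxAttempts) {
        this.retryableClasses = retryableClasses;
        this.maxAttempts = maxAttempts;
    }

    /** Executes the step; returns the number of attempts used on success. */
    int execute(Step step) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            String failure = step.run();
            if (failure == null) return attempt; // passed
            if (!retryableClasses.contains(failure)) {
                throw new RuntimeException("Non-retryable failure: " + failure);
            }
        }
        throw new RuntimeException("Retry limit reached");
    }
}
```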
[0042] Execution Layer 216 manages test execution across various execution environments (local or cloud).
[0043] CI/CD Integration Layer 218 provides seamless integration with Jenkins and other CI tools for pipeline-based automation. Test execution can be triggered via code commits, scheduled cron jobs, post-deployment hooks, and manual triggers. Test outcomes, logs, and metrics are sent back to Jenkins dashboards and emailed to stakeholders for real-time feedback.
[0044] Reporting Layer 220 handles report generation for the framework. Custom reporting is handled via Extent Reports and Cucumber HTML Reports, which provide detailed insights for each testcase, including screenshots on failure, API request and response payloads, status metrics (pass/fail/skipped), and links to Jira tickets or stories. Reports can be sent automatically via email or attached to tickets in Jira, thereby streamlining collaboration between QA and development teams.
[0045] The framework 102 provides a Low Code / No Code (LCNC) Interface. The LCNC capability allows users with minimal programming skills to define test scenarios in .feature files and call pre-built keywords and reusable components. This allows non-technical users such as manual testers, business analysts, product owners, and UAT stakeholders to efficiently use the test automation framework. It fosters quick test coverage expansion and democratizes test automation.
[0046] The framework 102 has built-in logic to identify transient test failures and re-execute them. The retry logic is based on failure codes, tags, or exception types. Results of each retry attempt are recorded in the reports.
[0047] The present invention includes advanced database and backend validation capabilities. It includes direct query execution on SQL and NoSQL databases, polling of message queues and brokers (Kafka) to check real-time data processing, and comparison of frontend values with backend records. These features ensure true end-to-end testing, validating not just the UI and APIs but also backend workflows and data consistency.
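The frontend-to-backend comparison can be sketched as a simple reconciliation over key/value records. The helper below is a hypothetical, stdlib-only illustration; in practice the `ui` map would be scraped from the application under test and the `backend` map fetched via SQL or a Kafka poll.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Sketch: reconcile values shown on the frontend with records fetched from the backend. */
class BackendValidator {
    /** Returns the keys whose UI value disagrees with the backend record. */
    static List<String> mismatches(Map<String, String> ui, Map<String, String> backend) {
        List<String> diff = new ArrayList<>();
        for (Map.Entry<String, String> e : ui.entrySet()) {
            if (!e.getValue().equals(backend.get(e.getKey()))) diff.add(e.getKey());
        }
        return diff;
    }
}
```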
[0048] According to an embodiment, the framework 102 supports multi-environment configurations, such as SIT, UAT, and pre-production. It uses encrypted configuration files and environment variables to dynamically adapt the test execution without needing any script modification. This enables organizations to use the same automation suite across different environments reliably.
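The environment-selection idea can be sketched as follows. The class, keys and URLs are hypothetical, and the encrypted-configuration-file handling described above is omitted to keep the sketch dependency-free; the essential point is that tests read the same keys regardless of which environment is active.

```java
import java.util.Map;

/** Sketch: resolve environment-specific settings (SIT/UAT/pre-production) without script changes. */
class EnvironmentConfig {
    private final Map<String, Map<String, String>> perEnv;

    EnvironmentConfig(Map<String, Map<String, String>> perEnv) {
        this.perEnv = perEnv;
    }

    /** The active environment is chosen at launch (e.g. a JVM property); tests read keys unchanged. */
    String get(String env, String key) {
        Map<String, String> cfg = perEnv.get(env);
        if (cfg == null || !cfg.containsKey(key)) {
            throw new IllegalArgumentException("Missing " + key + " for environment " + env);
        }
        return cfg.get(key);
    }
}
```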
[0049] According to an embodiment, the ARC framework 102 is versioned and maintained in a secure Bitbucket repository with controlled access policies. This ensures that the core team holds admin rights, general users have read-only access, and contractors and partners access the framework only under NDAs. Usage is monitored to ensure compliance, protect intellectual property, and maintain the confidentiality of the proprietary framework components.
[0050] Referring now to Fig. 3, which illustrates the sequence performed by the Framework 102 in a single execution cycle. At step 302 a continuous-integration/continuous-delivery server 106 detects either a source-code commit or a scheduled timer event and therefore initiates the pipeline. Proceeding to step 304, that same server checks out the latest scenario file together with its central configuration file from the version-control repository 104 and supplies both artefacts to the Feature layer 202. In step 306 the Step-Definition layer 204 parses each plain-language statement of the scenario and binds the statement to a corresponding executable step definition, thereby translating the declaration into runnable code. Control then shifts to step 308, where the common driver factory, located respectively in the Page-Object layer 206 for user-interface drivers and the API layer 208 for service drivers, instantiates the appropriate browser, mobile emulator or API client as dictated by the central configuration file. At step 310 the Test-Data layer 210 retrieves the data set specified in the scenario, drawing from one or more of CSV/JSON files, relational databases, NoSQL stores or message queues. Next, in step 312, the Utility/Keyword layer 212 executes the mapped keywords while the Test-Runner layer 214 coordinates the overall sequence and records each outcome. Decision block 314 determines whether the most-recent step has failed. If the decision is Yes, built-in retry logic routes control back to step 312 and re-executes the failed step until it passes or a predefined retry limit is reached; if the decision is No, processing advances to step 316, where the CI/CD server 106 posts the execution status to the external test-management system 112 and archives all artefacts in the repository 114.
Finally, at step 318, the Reporting layer 220 consolidates results from web, mobile and API executions into a single unified report, after which the flow terminates, thereby completing the fully automated, low-code/no-code test-execution cycle.
[0051] The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
CLAIMS
We Claim:
1. A computer-implemented unified testing system 100 to enable running a plurality of test scenarios across Web, Mobile and API platforms, the system 100 comprising:
i. an Automation Re-usable Core (ARC) framework 102; wherein the framework 102 further comprises:
a) a Feature Layer 202 for receiving a plurality of plain-language test scenarios;
b) a Step-Definition Layer 204 for mapping each scenario statement to executable code;
c) a Page-Object layer 206 and an API layer 208, each including a common driver factory that, in response to a central configuration file retrieved from a version-control repository 104, initialises a browser driver, a mobile-device driver, or an API client;
d) a Test-Data Layer 210 selectable to obtain datasets;
e) a Utility/Keyword Layer 212 exposing reusable keywords;
f) a Test-Runner Layer 214 that facilitates keyword execution and automatically retries a failed step when the failure matches a predefined retry class;
g) an Execution Layer 216 for local or cloud execution; and
h) a CI/CD Integration Layer 218 to provide seamless integration with CI/CD tools; and
i) a Reporting Layer 220 that aggregates results from web, mobile and API executions into a single report;
ii. a version-control repository 104 to host all framework source, reusable libraries and configuration artefacts;
iii. a continuous-integration server 106 configured to detect a code-commit or scheduled trigger, fetch the scenario file and the central configuration file from the repository 104, and invoke script execution on the Execution layer 216;
iv. a script execution server 108 to run test scenarios across heterogeneous environments; and
v. a repository module 110 operable to transmit pass/fail status to a test-management system 112 and to archive the report and associated artefacts in an artefact repository 114.
2. The system of claim 1, wherein the Test-Runner layer 214 re-executes a failed step automatically when the failure is classified as element-not-found, network time-out or session-expired, up to a user-defined retry limit.
3. The system of claim 1, wherein the Reporting layer 220 embeds screenshots captured by the Page-Object layer 206, request-and-response bodies captured by the API layer 208, and execution logs captured by the Execution layer 216 into the single report archived by the repository module 110.
4. A computer-implemented method for unified automation testing, the method comprising:
triggering an automation run with a continuous-integration server 106 upon detecting a code commit or scheduled event;
retrieving, from a version-control repository 104, (i) a scenario file authored in the Feature layer 202 and (ii) a central configuration file;
mapping each scenario statement, in the Step-Definition layer 204, to an executable step definition;
initialising, via a common driver factory of the Page-Object layer 206 or the API layer 208, a browser driver, mobile-device driver or API client in accordance with the central configuration file;
fetching test data, in the Test-Data layer 210, from at least one of static files, relational databases, NoSQL stores and message-broker queues;
executing mapped keywords through the Utility/Keyword layer 212 under orchestration of the Test-Runner layer 214;
evaluating, at the Test-Runner layer 214, whether a step has failed and, when the step is retry-eligible, re-executing the step until success or retry limit;
generating a unified execution report in the Reporting layer 220; and
posting execution status to a test-management system 112 and archiving the unified report and related artefacts in an artefact repository 114,
thereby completing an automated, low-code/no-code test-execution cycle that is reusable across web, mobile and API interfaces.
| # | Name | Date |
|---|---|---|
| 1 | 202511044249-PROVISIONAL SPECIFICATION [07-05-2025(online)].pdf | 2025-05-07 |
| 2 | 202511044249-FORM 1 [07-05-2025(online)].pdf | 2025-05-07 |
| 3 | 202511044249-DRAWINGS [07-05-2025(online)].pdf | 2025-05-07 |
| 4 | 202511044249-Proof of Right [03-06-2025(online)].pdf | 2025-06-03 |
| 5 | 202511044249-FORM-26 [03-06-2025(online)].pdf | 2025-06-03 |
| 6 | 202511044249-FORM-5 [02-07-2025(online)].pdf | 2025-07-02 |
| 7 | 202511044249-FORM 3 [02-07-2025(online)].pdf | 2025-07-02 |
| 8 | 202511044249-DRAWING [02-07-2025(online)].pdf | 2025-07-02 |
| 9 | 202511044249-COMPLETE SPECIFICATION [02-07-2025(online)].pdf | 2025-07-02 |
| 10 | 202511044249-FORM-9 [11-07-2025(online)].pdf | 2025-07-11 |
| 11 | 202511044249-FORM 18 [11-07-2025(online)].pdf | 2025-07-11 |