
Autopilot Your Software Testing

· 26 min read
Marcel Veselka

Two years ago, Gartner forecasted a 30% growth in adopting autonomous testing approaches in the subsequent year. This prediction highlighted a significant shift in the software testing landscape, where organizations began to recognize the potential of leveraging automation to enhance testing efficiency and effectiveness. Last year, Forrester's report echoed this trend, estimating a remarkable 20-30% increase in testing productivity through testing bots. These signals point towards a growing interest in autonomous testing solutions, driven by the need for faster, more reliable, and cost-effective testing processes.

1. Journey into better testing

In 1979, Glenford J. Myers introduced the concept of separating debugging from testing. He focused on breakage testing, stating, "A successful test case detects an as-yet undiscovered error." This highlighted the software engineering community's desire to separate essential development activities, such as debugging, from verification. Over the past few decades, our industry has been searching for new ways to enhance the efficiency of all testing activities. We have progressed from manual testing to automated testing, and more recently, there has been discussion about autonomous testing.



Manual testing involves repetitive tasks performed by human testers. While it's quick to get started and easy to adjust, it falls short in fast-paced software development environments following strategies like Agile or DevOps. The primary drawbacks include the time-consuming nature of repetitive tasks and the ineffectiveness of regression testing, which requires running the same tests multiple times to ensure that recent code changes haven't introduced new bugs.

Automated testing employs scripted robots to replicate the actions of manual testers, allowing for the rapid execution of tests. This approach significantly reduces the time required to run repetitive tasks, ensuring that tests are conducted quickly and consistently. However, setting up and maintaining automated tests requires substantial initial investment in time and resources. Creating and updating test scripts can be labor-intensive, especially in dynamic environments where applications frequently change. Moreover, while automated tests excel at performing predefined actions and checking specific outcomes, they often fall short in assessing the overall quality and user experience of the application. This limitation arises because automated scripts are typically designed to mimic manual testing steps rather than adaptively improving test coverage and effectiveness based on insights gained from previous test runs. Consequently, while automation enhances efficiency, it may not fully capture the nuanced quality metrics that human testers can identify.

Autonomous testing represents the next evolution, combining autonomous bots, learning algorithms, and AI/ML. This approach automates test generation, requiring no programming skills, and simplifies testing processes by automating result analysis and maintenance activities. It's as simple as manual testing and as quick as automated testing, offering a compelling blend of efficiency and effectiveness.

Simplified comparison of manual, automated, and autonomous testing:

| Manual Testing | Automated Testing | Autonomous Testing |
| --- | --- | --- |
| Quick to start | Rapid execution | No programming skills required |
| Easy to adjust | Consistent results | Automated result analysis |
| Time-consuming | Labor-intensive | Simplified maintenance |
| Ineffective regression testing | Limited adaptability | Enhanced test coverage |
| Subjective quality assessment | Limited quality metrics | Comprehensive quality insights |

2. Playwright, Robot Framework, and Cypress.io

This article is based on our previous webinars, where we covered alternative approaches to achieving high efficiency in creating and maintaining automated tests for web apps. During the webinars, we provided examples using three test automation tools: Playwright, Robot Framework, and Cypress.io. By combining our experiences with all three tools, we aim to offer additional value by comparing them in this concise article. We encourage readers to share their reactions by writing their own blog posts and comparing their experiences with other test automation tools.

The following section introduces the tools used in our demos for those who may not be familiar with them. You can skip it if you already know the tools.

Cypress.io

Cypress.io is a comprehensive end-to-end (E2E) testing framework for web applications. Built on JavaScript, it provides live debugging for quick issue resolution. As an open-source tool with an active community and extensive documentation, Cypress stands as a strong rival to Selenium, offering unique advantages for modern web application testing. Cypress supports real-time reloads, automatic waiting, and detailed error messages, making it a user-friendly option for developers and testers.


Playwright

Playwright is a cutting-edge E2E testing framework developed by Microsoft. It supports multiple browsers (Chromium, Firefox, and WebKit) and is designed for modern web application testing. Playwright allows for seamless parallel testing, handles multiple contexts for efficient testing, and offers network interception and geolocation testing. Like Cypress, it provides an intuitive API and robust support for debugging and automation.

Robot Framework

Robot Framework is a generic open-source automation framework used for test automation and robotic process automation (RPA). It uses a keyword-driven testing approach, making it accessible to both technical and non-technical users. Robot Framework supports integration with various tools and libraries, such as Selenium for web testing, and can be extended using Python or Java. Its structured format and clear syntax make it ideal for larger projects requiring detailed documentation and collaboration.

This table offers a concise summary of the strengths and use cases for each tool, which can help you select the most suitable option for your specific testing needs. While I strived to remain objective, it's important to note that this is my viewpoint and may be heavily biased.

| Feature/Tool | Cypress.io | Playwright | Robot Framework |
| --- | --- | --- | --- |
| Language | JavaScript/TypeScript | JavaScript/TypeScript | Keyword-driven (extensible with Python/Java) |
| Browser Support | Chromium-based browsers | Chromium, Firefox, WebKit | Any browser via Selenium |
| Parallel Testing | Limited | Yes | Yes (with Selenium Grid or Pabot) |
| Live Debugging | Yes | Yes | Limited |
| Automatic Waiting | Yes | Yes | Yes (with Playwright) |
| Network Interception | No | Yes | Yes (with Playwright) |
| Community Support | Strong, active community | Growing community | Mature community |
| Ease of Use | High (developer-friendly) | High (developer-friendly) | Medium (requires understanding of the keyword-driven approach) |
| Primary Use Case | E2E testing for modern web applications | E2E testing for modern web applications | General test automation and RPA |
| CI/CD Integration | Yes | Yes | Yes |

Examples of test scripts for each tool are provided below to help you compare their syntax and structure. These examples are based on simple test scenarios to demonstrate the basic capabilities of each tool. For more advanced use cases, refer to the official documentation and community resources for each tool.

```javascript
const { chromium } = require("playwright");

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(""); // target URL omitted in the original
  await page.screenshot({ path: "example.png" });
  await browser.close();
})();
```

All the examples with Cypress are available in the repository Autopilot your testing. If you want to see the examples for Playwright and Robot Framework, please let me know, and I will clean them up and provide them as well.

3. Optimize Your Productivity: 7 Ideas

There might be several opportunities to increase the efficiency of authoring and maintaining your tests. These are a few that I believe could help you boost your efficiency:

A: Technical Assertions

1. Status code check: Ensuring that your application returns the correct HTTP status codes is crucial. All tools allow you to check the status code of HTTP requests.
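A framework-agnostic sketch of such a check (the `response` stub below stands in for the object your tool returns — Playwright's `Response`, for example, exposes a `status()` method):

```javascript
// Minimal sketch of a status-code assertion. The shape of `response` is an
// assumption: any object with a status() method works.
function assertStatusOk(response) {
  const status = response.status();
  if (status < 200 || status >= 300) {
    throw new Error(`Expected a 2xx status, got ${status}`);
  }
  return status;
}

// Usage with a stubbed response:
console.log(assertStatusOk({ status: () => 200 })); // → 200
```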

2. Console check (JS errors & warnings): Monitoring the browser console for JavaScript errors and warnings can help catch issues that might not be immediately visible through UI tests. Implementing console checks in your tests ensures that any errors thrown during the execution are captured and addressed promptly.
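As a sketch of the idea, the collector below gathers errors and warnings and fails the test if any were seen. In Playwright you would feed it from the real `page.on("console", msg => collector.onMessage(msg.type(), msg.text()))` event; that wiring is assumed here so the collection logic can stand alone:

```javascript
// Sketch: collect console errors/warnings during a test run and assert
// at the end that none occurred.
function createConsoleCollector() {
  const issues = [];
  return {
    onMessage(type, text) {
      // Only errors and warnings are treated as test-relevant issues.
      if (type === "error" || type === "warning") issues.push({ type, text });
    },
    assertClean() {
      if (issues.length > 0) {
        throw new Error(`Console issues: ${JSON.stringify(issues)}`);
      }
    },
    issues,
  };
}

// Usage:
const collector = createConsoleCollector();
collector.onMessage("log", "page loaded");
collector.onMessage("error", "Uncaught TypeError: x is undefined");
console.log(collector.issues.length); // → 1
```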

3. Loading time check: Performance is a critical aspect of user experience. Cypress can be configured to measure and assert loading times for various resources, helping you ensure that your application meets performance standards.
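The core of such a check can be sketched as a pure function over navigation metrics. The metric names mirror the browser's `performance.timing` fields, and the threshold values are illustrative:

```javascript
// Sketch of a loading-time check: derive timings from navigation metrics
// and report which thresholds (in milliseconds) were exceeded.
function checkLoadingTimes(metrics, thresholds = {
  navigationTime: 500,
  domContentLoadedTime: 500,
  loadTime: 500,
}) {
  const timings = {
    navigationTime: metrics.responseEnd - metrics.navigationStart,
    domContentLoadedTime:
      metrics.domContentLoadedEventEnd - metrics.navigationStart,
    loadTime: metrics.loadEventEnd - metrics.navigationStart,
  };
  const violations = Object.keys(thresholds).filter(
    (key) => timings[key] > thresholds[key]
  );
  return { timings, violations };
}

// Usage with example metrics (milliseconds since navigation start):
const result = checkLoadingTimes({
  navigationStart: 0,
  responseEnd: 300,
  domContentLoadedEventEnd: 650,
  loadEventEnd: 900,
});
console.log(result.violations); // → ["domContentLoadedTime", "loadTime"]
```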


Status code and console checks are built into Cypress, making them easy to use but not very flexible to customize. Playwright and Robot Framework are more flexible here, but the checks need to be implemented manually.

B: Visual Assertions

4. Increase coverage: Visual assertions can significantly increase test coverage by verifying that the UI appears correctly to the user. All three tools can easily be combined with tools like Percy or Applitools, which perform visual validations to ensure your application renders correctly across different browsers and devices.

5. Boost coding efficiency: Visual assertions can streamline your test automation efforts by reducing the code needed to validate UI changes. This efficiency gain allows your team to focus on more complex scenarios and edge cases, improving overall test quality.
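To make the mechanism concrete, here is a minimal sketch of the core of a visual assertion: comparing two equally sized screenshots (flat pixel arrays here) against a tolerance. Dedicated tools such as Percy or Applitools do far more — anti-aliasing handling, ignore regions, cross-browser baselines — so this only illustrates the principle:

```javascript
// Sketch: fail when the fraction of differing pixels exceeds a tolerance.
function assertVisuallyEqual(baseline, current, tolerance = 0.01) {
  if (baseline.length !== current.length) {
    throw new Error("Screenshot size mismatch");
  }
  let diff = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== current[i]) diff++;
  }
  const ratio = diff / baseline.length;
  if (ratio > tolerance) {
    throw new Error(
      `Visual difference of ${(ratio * 100).toFixed(1)}% exceeds tolerance`
    );
  }
  return ratio;
}

// Usage: identical "images" pass with a difference ratio of 0.
console.log(assertVisuallyEqual([1, 2, 3, 4], [1, 2, 3, 4])); // → 0
```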

C: Autonomous Interactions

6. Form filling on the fly: Automating form interactions can save significant time. Cypress can dynamically fill out and submit forms using custom commands. This approach reduces manual intervention and ensures consistency across tests.

7. BDD Copilot: Behavior-driven development (BDD) frameworks such as Cucumber can be integrated with these tools to create test scenarios that are more readable and maintainable. BDD copilot tools can help generate step definitions and test data, simplifying the process of writing and managing BDD tests. BDD-like functionality is implemented as a custom command in Cypress, a class in Playwright, and by following the standard approach for Robot Framework, using keywords to implement steps.

4. Technical Assertions

The challenge of stable assertions

Assertions are the backbone of automated tests, but achieving stable and reliable assertions can be challenging. For instance, a test might pass if it detects a "Thank you for your order!" message but fails to verify the presence of critical UI elements like headers, footers, or buttons. This inconsistency can lead to unreliable test results and increased maintenance efforts.

See the following example (both variants below pass an assertion validating that the text "Thank you for your order!" is displayed):

a. Correct behavior – test passed

Check out - OK


b. Incorrect behavior – test passed

Check out - NOK


How to introduce it?

To introduce technical assertions, we have two options:

1. Modify existing tests: Introducing technical assertions into existing tests can enhance their flexibility and reliability. Cypress's built-in capabilities for checking status codes and other technical metrics make this process straightforward, although it may require additional effort to implement and maintain.

2. Introduce a crawler: Web crawlers can complement your testing strategy by automatically navigating through your application and performing technical assertions. This approach is easy to implement and can effectively increase test coverage, although it may lack the flexibility of manually written tests.

Demo: Web Crawling with Technical Assertions

```typescript
import { test } from "@playwright/test";
import { Crawling } from "./utils/crawler";

test("Crawl page", async ({ page }) => {
  const crawler = new Crawling(page); // custom class to crawl the site
  const urls = await crawler.crawlSite(""); // start URL omitted in the original
});
```

Implementing Console Check (JS Errors & Warnings)

A simple web crawler can traverse your application and perform technical checks, such as identifying JavaScript errors and monitoring resource loading times. Implementing this setup is straightforward and can be achieved with minimal code or configuration changes, providing an effective solution for detecting technical issues early.
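The traversal logic of such a crawler can be sketched as a breadth-first walk over the site's link graph. `fetchLinks` below is an assumed callback that, in a real test, would open each page in the browser, run the technical checks, and collect its links:

```javascript
// Minimal BFS crawler sketch: visits pages up to a depth limit and returns
// every URL reached. Deduplication via the `visited` set prevents loops.
function crawlSite(startUrl, fetchLinks, maxDepth = 2) {
  const visited = new Set([startUrl]);
  let frontier = [startUrl];
  for (let depth = 0; depth < maxDepth; depth++) {
    const next = [];
    for (const url of frontier) {
      for (const link of fetchLinks(url)) {
        if (!visited.has(link)) {
          visited.add(link);
          next.push(link);
        }
      }
    }
    frontier = next;
  }
  return [...visited];
}

// Usage with a stubbed link graph:
const graph = { "/": ["/a", "/b"], "/a": ["/c"] };
console.log(crawlSite("/", (url) => graph[url] || [], 2)); // → ["/", "/a", "/b", "/c"]
```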

These are just a few examples of technical assertions you can implement in your tests. More can be found in the repositories for each tool, or you can create your own based on your specific requirements.

```typescript
private async _LoadingAutoAssert(
  page: Page,
  threshold: {
    navigationTime: number;
    domContentLoadedTime: number;
    loadTime: number;
  } = {
    navigationTime: 500,
    domContentLoadedTime: 500,
    loadTime: 500,
  }
) {
  // Get the performance metrics from the browser
  const metrics = await page.evaluate(() => {
    const timing = performance.timing;
    const response = performance.getEntriesByType("navigation")[0];
    console.log("Performance timing:", response);

    return {
      navigationStart: timing.navigationStart,
      responseEnd: timing.responseEnd,
      domContentLoadedEventEnd: timing.domContentLoadedEventEnd,
      loadEventEnd: timing.loadEventEnd,
    };
  });

  // Calculate loading times
  const navigationTime = metrics.responseEnd - metrics.navigationStart;
  const domContentLoadedTime =
    metrics.domContentLoadedEventEnd - metrics.navigationStart;
  const loadTime = metrics.loadEventEnd - metrics.navigationStart;

  // Log the timings when any threshold is exceeded
  if (
    threshold.navigationTime < navigationTime ||
    threshold.domContentLoadedTime < domContentLoadedTime ||
    threshold.loadTime < loadTime
  ) {
    console.log("Navigation time:", navigationTime, "ms");
    console.log("DOMContentLoaded time:", domContentLoadedTime, "ms");
    console.log("Load time:", loadTime, "ms");
  }
}

// response.ok() returns true if the response status is 2xx
private async _StatusAutoAssert(response: Response | null) {
  if (response && !response.ok()) {
    console.log("Response status is not 2xx");
  }
}
```

5. Visual Assertions

Why Visual Validation?

1. Increase Test Automation Coverage: Visual validation extends your test coverage by ensuring that the UI renders correctly for users. This approach can catch visual defects that might be missed by functional tests alone. In the following image, you can see a comparison of two checkout pages: the first is correct, the second is not. With traditional functional tests, both pages would pass, but with visual validation, the second page would fail.

Increase Test Automation Coverage

Source: Stripe demo app.

2. Reduce Coding: Automating visual checks reduces the amount of code needed for UI validations, streamlining your test suite and making it easier to maintain. The following image shows an example of how visual validation can reduce coding.

Reduce Coding


3. Simplify Complex Assertions: Visual assertions can simplify complex validation scenarios, such as ensuring that a UI element appears correctly after a series of interactions. This simplification can improve test reliability and reduce maintenance overhead. The following image shows a rich UI that can easily be validated with visual validation; traditional functional tests would require a lot of code to validate it.

Simplify Complex Assertions

Source: internet banking demo.

When Visual Testing might NOT be ideal

1. Potential for slower test execution: Each visual assertion requires capturing and comparing screenshots, which can slow down test execution. It's essential to balance the need for thorough visual checks with the performance impact on your test suite.

2. Risk of flaky tests: Incorrectly implemented visual assertions can lead to flaky tests that fail intermittently. Ensuring that your visual validation setup is robust and reliable is crucial to avoid these issues. Flaky tests can be caused by various factors, such as network latency, rendering differences between browsers, or dynamic content that changes frequently. It's essential to address these issues to maintain the stability of your test suite.

3. Unnecessary automation: In some cases, visual testing might be overkill, especially if most of your tests are manual. It's important to evaluate whether the benefits of visual validation justify the additional complexity.


If you are interested in more information about visual testing, you can read our article Getting Started with Playwright Visual Testing.

Demo: Visual Assertions with Wopee Library

Setting Up Visual Validation with Wopee Library

Install Wopee

```shell
npm i @wopee-io/
```

Set up configuration file

The configuration file requires an API URL, an API key, and a project ID. It can be stored as JSON, .env, or YAML, or placed directly in the tool's config file (depending on the tool you use).

Example of .env file:


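Since the exact configuration is not shown here, the following is a hypothetical sketch of what such a `.env` file could look like — the variable names are assumptions for illustration, not the documented Wopee configuration:

```shell
# Hypothetical sketch — check the Wopee documentation for the actual names.
WOPEE_API_URL=https://api.example.com
WOPEE_API_KEY=your-api-key
WOPEE_PROJECT_ID=your-project-id
```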
Run Your Tests

Execute your tests with the Wopee library to perform visual validations. The library will capture screenshots of your application and compare them against baseline images to detect any visual differences. If discrepancies are found, the library will flag them as visual bugs, allowing you to investigate and resolve them promptly. Here's an example of running tests with the Wopee library using Playwright, Cypress, or Robot Framework.

```shell
npx playwright test
```

The above examples are simplified and may not cover all the necessary steps to run your tests. Running them properly often requires a more advanced setup, such as a CI/CD pipeline. For more information, refer to the official documentation of the tool you are using.

Benefits of Wopee visual testing

  • Broaden Testing Coverage: Expand your testing horizons and ensure comprehensive coverage of your applications, catching issues that traditional testing might miss.
  • Simplify Automation for Complexity: Tackle intricate features with ease. Our Copilot simplifies the automation of even the most complex functionalities, streamlining your testing process.
  • Guard Against Visual Bugs: Safeguard your production environment by minimizing the risk of visual bugs slipping through undetected. Wopee assistant ensures the visual integrity of your applications.
  • Boost Testing Team Efficiency: Foster seamless collaboration among your team members. Our Visual Testing Copilot enhances teamwork, making it easier for your testing team to work together efficiently.

Try Wopee visual testing today and experience the benefits of comprehensive visual validation for your applications. Elevate your testing strategy and ensure the visual integrity of your software with Wopee.

Sign up for a free trial.

6. Autonomous Interactions

During our test automation projects, we often encounter challenges related to authoring and maintaining test scripts. These challenges can hinder productivity, increase maintenance overhead, and limit the scalability of test automation efforts. By introducing autonomous interactions into your testing strategy, you can streamline test creation, enhance test coverage, and improve the efficiency of your testing process.

The following section explores the benefits of autonomous interactions and provides examples of how you can leverage them in your test automation projects. All examples are based on our prototypes and are intended to showcase the potential of autonomous testing in enhancing test automation efficiency by leveraging AI and large language models (LLMs).

Challenges in Authoring and Maintenance

  1. Locator Issues: Identifying and maintaining locators can be particularly challenging, especially as applications undergo frequent changes and updates. This can lead to unstable tests that require constant attention.

  2. Frequent Application Changes: The dynamic nature of modern applications often necessitates frequent updates and modifications. Keeping test scripts aligned with these changes can be daunting, resulting in increased maintenance efforts and potential disruptions to the testing process.

  3. Maintenance Overhead: Maintaining existing tests can consume significant time and resources. This often leaves less time for developing new tests, hindering the ability to expand test coverage and keep pace with application development.
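One common mitigation for the locator issues above can be sketched as a prioritized-fallback lookup: try several candidate selectors and use the first one that resolves. `queryFn` is an assumed callback wrapping your framework's element lookup; this is an illustration of the idea, not a specific tool's self-healing API:

```javascript
// Sketch: soften locator churn by trying candidates in priority order.
function resolveLocator(candidates, queryFn) {
  for (const selector of candidates) {
    if (queryFn(selector)) return selector;
  }
  throw new Error(`No locator matched: ${candidates.join(", ")}`);
}

// Usage: the id changed after a release, so the data-testid fallback kicks in.
const present = new Set(["[data-testid='login']", "button.submit"]);
console.log(resolveLocator(["#login-btn", "[data-testid='login']"], (s) => present.has(s)));
// → "[data-testid='login']"
```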


  1. Vercel AI Playground: The tool offers the ability to test various LLM models through a simple chat UI, allowing you to compare the speed and quality of the selected models side by side. Additionally, it provides cost estimations.

  2. GitHub Copilot: An AI-powered code completion tool developed by GitHub in collaboration with OpenAI, designed to assist developers by suggesting code snippets and entire functions as they type. It leverages machine learning to provide contextually relevant suggestions, enhancing productivity and streamlining the coding process.

Prompting & prompt templates

  1. Prompting: The process of generating prompts to guide the AI model in producing relevant and accurate responses. By providing clear and concise prompts, you can direct the AI model to focus on specific tasks or topics, improving the quality of its output.

  2. Prompt Templates: Predefined templates that contain placeholders for variables or keywords that can be filled in with specific values or instructions. These templates serve as a starting point for generating prompts and can be customized to suit different use cases or scenarios.
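A prompt-template engine can be as small as a placeholder substitution. The sketch below fills `{{ html }}`-style slots and fails loudly on missing variables:

```javascript
// Sketch of a prompt-template renderer: replaces {{ placeholder }} slots
// with the supplied values.
function renderPrompt(template, vars) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, key) => {
    if (!(key in vars)) throw new Error(`Missing template variable: ${key}`);
    return vars[key];
  });
}

// Usage:
console.log(
  renderPrompt("Use locators from this HTML: {{ html }}", {
    html: "<form id='login'></form>",
  })
);
```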

Example of a prompt template for generating test steps:

I'm a test engineer writing tests in Cypress using JavaScript.

I've opened a web page and want to fill in and submit (click on a button as a last step) the form on this page.

Use realistic test data (consider defined and typical validations) and locators from this HTML:

{{ html }}

Provide me with steps to accomplish it in JSON format. Example:

[
  { "step": 1, "locator": "#name", "value": "Marcel", "action": "fill" },
  { "step": 2, "locator": "#pswd", "value": "abc123", "action": "fill" },
  { "step": 3, "locator": "#submit", "action": "click" }
]

When creating prompts for AI models, it's essential to be clear and specific about the desired outcome. Providing detailed instructions and context helps the model generate accurate and relevant responses. I often use the same prompt for all tools, but sometimes I need to adjust it slightly to fit the specific tool.


I often use an LLM to help me improve my prompt templates. It can suggest how to refine a prompt to get better results. Simply use a prompt template like this:

Improve, simplify, and make more specific the following prompt template:
{{ prompt }}

How the "magic" works

We use the following approach to work with any large language model (LLM). Here's how it works:

  1. Provide a Prompt: We give the LLM model a specific instruction or question, called a prompt (combining data + prompt template).
  2. Generate a JSON Response: The LLM model generates a response in JSON format, a structured way to represent data.
  3. Parse and Take Action: Our test script extracts the information from the JSON response (parses it) and performs the appropriate action based on the response.

We use an OpenAI model in our examples; however, this approach works with any LLM, not just OpenAI's. We simply use a REST API (a standard communication method) to connect to the model we're testing. By automating test-step generation with LLMs, we save time and effort compared to manual creation, especially for creating and maintaining locators and test data.
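The parse-and-act step can be sketched as a small dispatcher, assuming the model returns the bare step array from the prompt template above. `page` here is a stub standing in for a real Playwright page:

```javascript
// Sketch of step 3: parse the LLM's JSON response and dispatch each step.
// The action names ("fill", "click") match the JSON format shown earlier.
function executeSteps(jsonResponse, page) {
  const steps = JSON.parse(jsonResponse);
  const executed = [];
  // Execute in declared step order, tolerating out-of-order responses.
  for (const s of [...steps].sort((a, b) => a.step - b.step)) {
    if (s.action === "fill") page.fill(s.locator, s.value);
    else if (s.action === "click") page.click(s.locator);
    else throw new Error(`Unknown action: ${s.action}`);
    executed.push(s.action);
  }
  return executed;
}

// Usage with a stubbed page:
const log = [];
const stubPage = {
  fill: (locator, value) => log.push(`fill ${locator} = ${value}`),
  click: (locator) => log.push(`click ${locator}`),
};
const json =
  '[{"step":1,"locator":"#name","value":"Marcel","action":"fill"},' +
  '{"step":2,"locator":"#submit","action":"click"}]';
console.log(executeSteps(json, stubPage)); // → ["fill", "click"]
```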


Demo: Autonomous form filling

Creating custom methods/keywords/commands to dynamically interact with forms can significantly enhance productivity. Here's an example of steps provided by the LLM model in JSON format:

"actions": [
"step": 1,
"locator": "input[name='user']",
"value": "",
"action": "fill"
"step": 2,
"locator": "input[name='password']",
"value": "SecurePass123!",
"action": "fill"
{ "step": 3, "locator": "button.btn.btn-main-sm", "action": "click" }

Here are a few examples of how to implement it in different tools:

```typescript
import { test } from "@playwright/test";
import { WopeeCopilot } from "./utils/ai";

// const baseURL = ""; // base URLs omitted in the original
const baseURL = "";
let wopeeCopilot: WopeeCopilot;

test.beforeEach(async ({ page }) => {
  wopeeCopilot = new WopeeCopilot(page);
});

test.only("Login with valid credentials", async ({ page }) => {
  await page.goto(baseURL);
  await page.click("#sign_in >> text=Sign in");
  // await wopeeCopilot.action("Navigate to login page");

  await wopeeCopilot.fillForm();
});

test("Login with valid credentials 2", async ({ page }) => {
  await page.goto(baseURL);
  // await page.click("#sign_in >> text=Sign in");
  await wopeeCopilot.action("Navigate to login page");
  await wopeeCopilot.action("Fill in into the username field");
  await wopeeCopilot.action("Fill in pswd!123 into the password field");
  await wopeeCopilot.action("Submit the login form");
});
```

Demo: Autonomous BDD copilot

Our example integrates the test tools with an LLM to enhance the readability and maintainability of tests. Here's an example of a BDD scenario with autogenerated low-level code:

```typescript
import { test } from "@playwright/test";
import { WopeeCopilot } from "./utils/ai";

// const baseURL = ""; // base URLs omitted in the original
const baseURL = "";
let wopeeCopilot: WopeeCopilot;

test.beforeEach(async ({ page }) => {
  wopeeCopilot = new WopeeCopilot(page);
});

test.only("Login with valid credentials", async ({ page }) => {
  await page.goto(baseURL);
  await page.click("#sign_in >> text=Sign in");
  // await wopeeCopilot.action("Navigate to login page");

  await wopeeCopilot.fillForm();
});

test("Login with valid credentials 2", async ({ page }) => {
  await page.goto(baseURL);
  // await page.click("#sign_in >> text=Sign in");
  await wopeeCopilot.action("Navigate to login page");
  await wopeeCopilot.action("Fill in into the username field");
  await wopeeCopilot.action("Fill in pswd!123 into the password field");
  await wopeeCopilot.action("Submit the login form");
});
```

7. Conclusion: Reducing Test Automation Complexity

As the software testing landscape continues to evolve, autonomous testing emerges as a game-changer, promising unprecedented efficiency and effectiveness. By leveraging AI, machine learning, and advanced automation tools like Cypress, Playwright, and Robot Framework, organizations can streamline their testing processes, reduce maintenance overhead, and ensure higher-quality software releases. Embracing autonomous testing not only addresses the limitations of manual and traditional automated testing but also paves the way for innovative testing strategies that adapt to the dynamic nature of modern software development.

The journey from manual to automated to autonomous testing signifies a paradigm shift in how we approach quality assurance. Autonomous testing offers a harmonious blend of simplicity, speed, and intelligence, making it an indispensable tool for today’s fast-paced development environments. As we look to the future, the potential for predictive test selection, self-healing locators, and smarter reporting will further solidify autonomous testing as a cornerstone of effective software development.


Testing bots

As an alternative to enhancing your traditional test automation, you can use a testing bot to simplify the process of creating and maintaining test scripts.

The bot can generate test steps, reducing the need for manual script writing and maintenance. By leveraging AI and machine learning, the bot can understand and interpret your testing requirements, generating accurate and relevant test steps to streamline your testing process.

Work with Us

At Wopee, we are committed to pioneering these advancements and providing cutting-edge solutions that empower your testing teams. Join us in embracing the future of testing and set your testing processes on autopilot today. With autonomous testing, you can achieve superior quality, faster release cycles, and a competitive edge in the ever-evolving tech landscape.

For more information and to start your journey with autonomous testing, sign up for a free trial and experience the transformative power of our comprehensive testing solutions.