Automation QA Testing Course Content

Which API Framework Is Better: RestAssured or Karate?

 Both RestAssured and Karate are popular frameworks for API testing, but they serve slightly different purposes and have different strengths and weaknesses. Here’s a comparison to help you decide which one might be better for your specific needs:

RestAssured

Pros:

  1. Java Integration: RestAssured is a Java-based library, making it a good choice if you're already working within a Java ecosystem. It can easily integrate with other Java tools and frameworks.
  2. Flexibility: RestAssured provides fine-grained control over HTTP requests and responses, which is useful for complex testing scenarios.
  3. Mature and Robust: It is a well-established tool with extensive documentation and a large user community.
  4. Support for BDD: RestAssured supports Behavior-Driven Development (BDD) style tests using Cucumber.

Cons:

  1. Verbose: Writing tests can be more verbose compared to Karate, as you need to write more code to achieve the same functionality.
  2. Steeper Learning Curve: For beginners, getting started with RestAssured might be more challenging due to its extensive configuration and setup requirements.
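To give a feel for the RestAssured style, here is a hedged sketch of a login test. The endpoint and payload are hypothetical, and the snippet assumes the rest-assured and hamcrest dependencies on the classpath plus a live API, so treat it as illustrative rather than runnable as-is:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class LoginApiTest {
    public static void main(String[] args) {
        // Hypothetical endpoint and payload - adjust to your API under test
        given()
            .contentType("application/json")
            .body("{\"username\": \"testuser\", \"password\": \"password123\"}")
        .when()
            .post("https://api.example.com/login")
        .then()
            .statusCode(200)                    // assert HTTP status
            .body("success", equalTo(true));    // assert a JSON attribute
    }
}
```

Note the given/when/then chain: expressive, but each assertion and header is explicit Java code, which is where the verbosity comes from.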

Karate

Pros:

  1. Ease of Use: Karate is designed to be simple and easy to use, with a low learning curve. It uses a domain-specific language (DSL) that is very readable and concise.
  2. All-in-One Framework: Karate integrates API testing, mocking, performance testing, and UI testing into a single framework.
  3. No Coding Required: You can write tests without having to write any Java code, making it accessible for non-developers.
  4. Built-in Assertions: Karate comes with built-in support for JSON and XML assertions, which makes it very convenient for validating responses.

Cons:

  1. Less Flexibility: While Karate is very powerful for most standard API testing needs, it might lack the flexibility required for more complex testing scenarios compared to RestAssured.
  2. Limited Customization: Being a higher-level framework, it might not provide as much control over the HTTP requests and responses as RestAssured.
  3. Performance: For extremely large and complex test suites, RestAssured might perform better due to its fine-grained control and optimizations.

When to Use RestAssured

  • Java Ecosystem: If your project is heavily based on Java and you prefer writing tests in Java.
  • Complex Scenarios: When you need extensive control over your HTTP requests and responses.
  • BDD Support: If you want to integrate with Cucumber for BDD-style tests.

When to Use Karate

  • Simplicity and Speed: When you need to get started quickly with minimal setup and want to write readable, maintainable tests.
  • Non-Developers: If your testing team includes members who are not proficient in coding.
  • All-in-One Solution: When you want a single framework to handle API, UI, and performance testing without needing multiple tools.
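For contrast with the RestAssured style, a typical Karate test is a plain feature file written in its DSL. This is an illustrative sketch against a hypothetical login endpoint:

```gherkin
Feature: Login API

  Scenario: successful login
    Given url 'https://api.example.com/login'
    And request { username: 'testuser', password: 'password123' }
    When method post
    Then status 200
    And match response.success == true
```

No Java code is needed; the DSL handles the request, the status check, and the JSON match in five readable lines.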

Conclusion

Both frameworks have their strengths and can be the better choice depending on your specific requirements. If you are looking for ease of use, quick setup, and an all-in-one testing solution, Karate might be the better choice. On the other hand, if you need more control, flexibility, and integration with a Java ecosystem, RestAssured would be more suitable.

Evaluate your project needs, team skillset, and the complexity of your test scenarios to make the best choice.

=========================================

Why Is API Automation Better than UI Automation?

API automation is the process of automating interactions with Application Programming Interfaces (APIs) using scripts, tools, or software.

UI automation, or User Interface automation, refers to the process of using software or tools to simulate user interactions with a graphical user interface (GUI) of a software application or system.

Here are some reasons why API automation is considered better than UI automation in certain scenarios:

  • Faster Execution:

API tests tend to run faster than UI tests because they don’t involve rendering web pages or interacting with the user interface. API requests and responses are typically lightweight, leading to quicker test execution.

  • Stability:

UI tests can be fragile and sensitive to changes in the user interface, making them prone to breakage when UI elements or layouts change. API tests are generally more stable because they focus on the underlying business logic and data flow.

  • Coverage:

API tests can provide comprehensive coverage of the application’s functionality, including edge cases and error scenarios. UI tests may not cover all possible user interactions or inputs.

  • Efficiency:

API tests can be more efficient for repetitive tasks like regression testing or load testing because they require fewer resources and can be run in parallel.

  • Data Validation:

API tests are well-suited for data validation because they can easily verify the correctness of data returned by the API. UI tests may require additional effort to extract and validate data displayed on the user interface.

  • Cross-Platform and Cross-Browser Compatibility:

API tests are not dependent on specific browsers or platforms, making them more suitable for testing the backend of applications that need to work across different environments.

  • Early Detection of Issues:

API tests can be integrated into the development pipeline to catch issues early in the development process, promoting a shift-left approach to testing.

  • Reduced Maintenance:

UI automation scripts often require frequent updates due to changes in the user interface. API tests tend to have lower maintenance overhead since they are less affected by UI changes.

  • Scalability:

API automation can easily scale to test multiple endpoints, versions, or services in a micro-services architecture.

  • Isolation:

API tests can be isolated from the user interface, allowing you to test specific functionalities or endpoints independently. This isolation simplifies debugging and makes it easier to identify the root cause of issues.

Let’s illustrate the difference between API automation and UI automation using a simple scenario of testing a login functionality.

The examples below use two popular tools: Postman for API automation and Selenium for UI automation.

API Automation Example (Using Postman)

In this API automation example, we're using Postman to send a POST request to a login API endpoint. We then assert that the HTTP status code is 200 and that the response contains a "success" attribute set to true. Note that pm.sendRequest is asynchronous, so the assertions belong inside its callback.

// Postman test script for API automation
// Define the API endpoint and request parameters
const request = {
  url: "https://api.example.com/login",
  method: "POST",
  header: {
    "Content-Type": "application/json",
  },
  body: {
    mode: "raw",
    raw: JSON.stringify({
      username: "testuser",
      password: "password123",
    }),
  },
};

// Send the API request; pm.sendRequest is asynchronous,
// so the assertions run in its callback
pm.sendRequest(request, function (err, response) {
  pm.test("Login API Test", function () {
    pm.expect(err).to.be.null;
    pm.expect(response.code).to.equal(200); // Check HTTP status code
    pm.expect(response.json().success).to.be.true; // Check a specific JSON attribute
  });
});

UI Automation Example (Using Selenium and Python)

In this UI automation example, we’re using Selenium (with Python) to automate interactions with a web-based login page. We locate elements on the page, enter login credentials, click the login button, then verify the presence of a welcome message to confirm successful login.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Set up the Selenium WebDriver for a web browser (e.g., Chrome)
driver = webdriver.Chrome()

# Open the login page
driver.get("https://www.example.com/login")

# Locate the username and password input fields and the login button
# (find_element_by_* was removed in Selenium 4; use find_element with By locators)
username_input = driver.find_element(By.ID, "username")
password_input = driver.find_element(By.ID, "password")
login_button = driver.find_element(By.ID, "login-button")

# Enter login credentials and submit the form
username_input.send_keys("testuser")
password_input.send_keys("password123")
login_button.click()

# Perform assertions on the UI to verify successful login
welcome_message = driver.find_element(By.CSS_SELECTOR, ".welcome-message")
assert "Welcome, testuser!" in welcome_message.text

# Close the browser
driver.quit()

As you can see, API automation focuses on making HTTP requests and validating API responses, while UI automation interacts with the user interface of a web application.

Summary

API automation brings speed, stability, broader coverage, and lower maintenance, while UI automation validates the actual end-user experience. Most mature test strategies combine both, pushing the bulk of functional checks down to the API layer and keeping a smaller set of critical UI journeys.


Playwright Page Object Model Framework


Functional UI Automation Framework - Open Cart Website

This UI Automation framework repository has some basic functional tests for the Open Cart UI website. It covers the login, add-products-to-cart, and checkout functionality.

This framework is built with the following:

  • Language: Java
  • Build Tool: Maven
  • UI Framework: Playwright
  • Testing Framework: TestNG
  • Reporting: ExtentReports
  • Logging: Log4j
  • Design Pattern: Page Object Model
  • CI: GitHub Actions
==================================================================
Design the Framework:

  • Use any IDE tool (Eclipse / IntelliJ / VS Code)
  • Create a Maven project, then add the dependencies below to pom.xml
------------------------------------------------------------------------------------------------------------------------
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>pom.playwright</groupId>
  <artifactId>opencart-ui-automation</artifactId>
  <packaging>jar</packaging>
  <version>1.0.0</version>
  <name>open-cart-ui-automation</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>
  <dependencies>
    <dependency>
      <groupId>com.microsoft.playwright</groupId>
      <artifactId>playwright</artifactId>
      <version>1.26.0</version>
    </dependency>
    <dependency>
      <groupId>org.testng</groupId>
      <artifactId>testng</artifactId>
      <version>7.6.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-api</artifactId>
      <version>2.19.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
      <version>2.19.0</version>
    </dependency>
    <dependency>
      <groupId>com.aventstack</groupId>
      <artifactId>extentreports</artifactId>
      <version>5.0.9</version>
    </dependency>
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <version>1.18.24</version>
      <optional>true</optional>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>3.0.0-M7</version>
        <configuration>
          <suiteXmlFiles>
            <suiteXmlFile>./src/test/resources/testrunners/testng.xml</suiteXmlFile>
          </suiteXmlFiles>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-javadoc-plugin</artifactId>
        <version>3.4.1</version>
      </plugin>
    </plugins>
  </build>
</project>
==================================================================

Framework Design

  • PlaywrightFactory - Base class to create the Playwright objects (Page, BrowserContext, Browser, Playwright)
    • Uses the test configuration to set up the browser and the Playwright BrowserContext with tracing, video recording, session state, and viewport
    • Exposes only a few public methods (createPage(), takeScreenshot(), saveSessionState())
    • No ThreadLocal static variables are used for the Playwright objects; instead, everything is encapsulated in this class and only the Page object is returned. Parallel execution is still supported, which improves the framework design.
  • The pages package contains page objects and functional methods of each page
    • Login page - login page objects and login functional method
    • Home page - home page objects and add to cart functional method
    • Shopping Cart page - Cart page objects and checkout functionality method
  • The script can take screenshots of specific steps and can save the session state using the Playwright Storage State feature.
  • Static variables are reduced as much as possible. Only the methods and variables that can be shared across all tests are declared static.

Test Design

  • Each test in the tests package is independent and complete.
  • TestBase class uses the TestNG configuration annotations for set up and tear down.
    • @BeforeSuite : Clean up the results directory, initialize the extent reports and logger, and read the test properties.
    • @AfterSuite : Tear-down method to write (flush) the results to the extent reports and assert all the soft asserts.
    • @BeforeMethod : Start the Playwright server, instantiate the page, and navigate to the website.
    • @AfterMethod : Stop the tracing (if enabled), take screenshots (if the test did not succeed), and close the page instance.
    • @BeforeClass : Used in each test class to create the ExtentTest for reporting.
  • For each test, a new Playwright server is launched, isolated from the other Playwright instances.
  • The current test design supports parallel execution of TestNG tests. This is achieved by reducing the scope of the variables and objects used.
  • TestRunner holds the test class configuration. More test runners can be added here, and the same should be updated in the pom.xml surefire plugin.
  • The test addMoreProductToCartAndCheckoutTest is designed to use the Playwright Storage State feature; the previous login state is reused in this test.
------------------------------------------------------------------------------------------------------------------------
Create PlaywrightFactory class:
src/main/java/base/PlaywrightFactory.java
--------------------------------------------------------------------------------------------------------------
package base;

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import com.microsoft.playwright.Browser;
import com.microsoft.playwright.Browser.NewContextOptions;
import com.microsoft.playwright.BrowserContext;
import com.microsoft.playwright.BrowserType;
import com.microsoft.playwright.BrowserType.LaunchOptions;
import com.microsoft.playwright.Page;
import com.microsoft.playwright.Playwright;
import com.microsoft.playwright.Tracing;

import utils.TestProperties;

/**
 * The class PlaywrightFactory provides a constructor which starts the
 * playwright server.
 * It has private and public methods to create a playwright page.
 * 
 * @author Ramesh Ch
 */
public class PlaywrightFactory {

    private static Logger log = LogManager.getLogger();
    private Playwright playwright;
    private TestProperties testProperties;

    /**
     * Constructor to initialize the test properties and playwright server
     * 
     * @param testProperties - {@link TestProperties}
     */
    public PlaywrightFactory(TestProperties testProperties) {
        this.testProperties = testProperties;
        playwright = Playwright.create();
    }

    /**
     * Method is to get playwright {@link Browser} instance of browser property in
     * config file with headless mode property
     * 
     * @return Browser - Returns playwright {@link Browser} instance
     * @throws IllegalArgumentException - Throws Exception when no matching browser
     *                                  is available for property
     */
    private Browser getBrowser() throws IllegalArgumentException {
        String browserName = testProperties.getProperty("browser");
        boolean headless = Boolean.parseBoolean(testProperties.getProperty("headless"));
        LaunchOptions launchOptions = new BrowserType.LaunchOptions().setHeadless(headless);
        BrowserType browserType;
        switch (browserName.toLowerCase()) {
            case "chromium":
                browserType = playwright.chromium();
                break;
            case "firefox":
                browserType = playwright.firefox();
                break;
            case "safari":
                browserType = playwright.webkit();
                break;
            case "chrome":
                browserType = playwright.chromium();
                launchOptions.setChannel("chrome");
                break;
            case "edge":
                browserType = playwright.chromium();
                launchOptions.setChannel("msedge");
                break;
            default:
                String message = "Browser Name '" + browserName + "' specified is invalid.";
                message += " Please specify one of the supported browsers [chromium, firefox, safari, chrome, edge].";
                log.debug(message);
                throw new IllegalArgumentException(message);
        }
        log.info("Browser Selected for Test Execution '{}' with headless mode as '{}'", browserName, headless);
        return browserType.launch(launchOptions);
    }

    /**
     * Method to get the playwright {@link BrowserContext} with the video recording,
     * tracing, storage state and viewport.
     * These properties are set based on values in the config properties.
     * 
     * @return BrowserContext - Returns playwright {@link BrowserContext} instance
     */
    private BrowserContext getBrowserContext() {
        BrowserContext browserContext;
        Browser browser = getBrowser();
        NewContextOptions newContextOptions = new Browser.NewContextOptions();

        if (Boolean.parseBoolean(testProperties.getProperty("enableRecordVideo"))) {
            Path path = Paths.get(testProperties.getProperty("recordVideoDirectory"));
            newContextOptions.setRecordVideoDir(path);
            log.info("Browser Context - Video Recording is enabled at location '{}'", path.toAbsolutePath());
        }

        int viewPortHeight = Integer.parseInt(testProperties.getProperty("viewPortHeight"));
        int viewPortWidth = Integer.parseInt(testProperties.getProperty("viewPortWidth"));
        newContextOptions.setViewportSize(viewPortWidth, viewPortHeight);
        log.info("Browser Context - Viewport Width '{}' and Height '{}'", viewPortWidth, viewPortHeight);

        if (Boolean.parseBoolean(testProperties.getProperty("useSessionState"))) {
            Path path = Paths.get(testProperties.getProperty("sessionState"));
            newContextOptions.setStorageStatePath(path);
            log.info("Browser Context - Used the Session Storage State at location '{}'", path.toAbsolutePath());
        }

        browserContext = browser.newContext(newContextOptions);

        if (Boolean.parseBoolean(testProperties.getProperty("enableTracing"))) {
            browserContext.tracing().start(new Tracing.StartOptions().setScreenshots(true).setSnapshots(true));
            log.info("Browser Context - Tracing is enabled with Screenshots and Snapshots");
        }
        return browserContext;
    }

    /**
     * Method to create a new playwright {@link Page} for the browser
     * 
     * @return Page - Returns playwright {@link Page} instance or null if any
     *         exception occurs while retrieving {@link BrowserContext}
     */
    public Page createPage() {
        Page page = null;
        try {
            page = getBrowserContext().newPage();
        } catch (Exception e) {
            log.error("Unable to create Page : ", e);
        }
        return page;
    }

    /**
     * Method to save the session state from the {@link BrowserContext} in a file
     * provided in 'sessionState' property
     * 
     * @param page     - playwright {@link Page} instance
     * @param filename - {@link String} name of the file to store session state
     */
    public static void saveSessionState(Page page, String filename) {
        page.context().storageState(new BrowserContext.StorageStateOptions()
                .setPath(Paths.get(filename)));
    }

    /**
     * Method to take screenshot of the {@link Page}
     * It saves the screenshots with file name of ${currentTimeMillis}.png
     * 
     * @param page - playwright {@link Page} instance
     * @return String - Returns encoded {@link Base64} String of image
     */
    public static String takeScreenshot(Page page) {
        String path = System.getProperty("user.dir") + "/test-results/screenshots/" + System.currentTimeMillis()
                + ".png";

        byte[] buffer = page.screenshot(new Page.ScreenshotOptions().setPath(Paths.get(path)).setFullPage(true));
        String base64Path = Base64.getEncoder().encodeToString(buffer);

        log.debug("Screenshot is taken and saved at the location  {}", path);
        return base64Path;
    }
}

------------------------------------------------------------------------------------------------------------------

Test Configuration

  • The test configuration (URL, username, Base64-encoded password, flags to enable/disable video recording and tracing, and the locations for test results and artifacts) is provided in the config.properties file. These properties can be overridden by runtime properties, if provided.
  • To read and update the properties, a single instance of the TestProperties class is used throughout the entire test execution. This removes the static Properties variable and the need to pass it across methods and classes.
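The override behavior described above can be sketched with plain JDK calls. Here, `resolve` is a hypothetical helper (the repository's TestProperties may implement this differently) in which a `-Dkey=value` JVM property takes precedence over the config-file value:

```java
import java.util.Properties;

public class PropertyOverrideDemo {
    // Hypothetical helper: a runtime -Dkey=value JVM property wins over the file value
    static String resolve(Properties fileProps, String key) {
        String runtime = System.getProperty(key);
        if (runtime != null) {
            return runtime.trim();
        }
        String fromFile = fileProps.getProperty(key);
        return fromFile != null ? fromFile.trim() : null;
    }

    public static void main(String[] args) {
        Properties fileProps = new Properties();
        fileProps.setProperty("browser", "chrome");

        System.out.println(resolve(fileProps, "browser")); // chrome (from config file)

        System.setProperty("browser", "firefox"); // simulates: mvn test -Dbrowser=firefox
        System.out.println(resolve(fileProps, "browser")); // firefox (runtime override)
    }
}
```

Run with `mvn test -Dbrowser=firefox` and the runtime value wins; omit the flag and the config-file value is used.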

Create a config.properties file under src/main/resources:


    # Page url
    url = https://naveenautomationlabs.com/opencart/

    # Browser - allowed values [chrome, chromium, edge, firefox, safari]
    browser = chrome

    # To Run test in headless mode
    headless = true

    # App credentials - username
    username = rameshqaonline@gmail.com

    # App credentials - Base64 encoded password (decoded value: Test@1234)
    password = VGVzdEAxMjM0

    # Option to enable video recording of execution
    enableRecordVideo = false

    # Path to save video recording files
    recordVideoDirectory = test-results/scriptRecordVideos/

    # Option to enable tracing (Capture HAR, Screenshots, events) of execution
    enableTracing = false

    # Path to save tracing zip file
    tracingDirectory= test-results/traces/

    # To set View Port - Width
    viewPortWidth = 1280

    # To set View Port - Height
    viewPortHeight = 720

    # Test execution Extent report file name
    extentReportPath = test-results/TestExecutionReport.html

    # Option to enable session state storage 
    useSessionState = false

    # Session state storage file name
    sessionState = src/main/resources/session-state.json
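The password in config.properties is meant to be stored Base64-encoded, which is obfuscation only, not encryption. A minimal stdlib sketch of the encode/decode round trip (the class name is illustrative; the sample password Test@1234 encodes to VGVzdEAxMjM0):

```java
import java.util.Base64;

public class PasswordCodec {
    public static void main(String[] args) {
        // Encode once and paste the result into config.properties
        String encoded = Base64.getEncoder().encodeToString("Test@1234".getBytes());
        System.out.println(encoded); // VGVzdEAxMjM0

        // The framework decodes it back before filling the login form
        String decoded = new String(Base64.getDecoder().decode(encoded));
        System.out.println(decoded); // Test@1234
    }
}
```

For real secrecy, inject the credential at runtime (environment variable or CI secret) instead of committing it.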

    =====================================================

    Logger

    • The log4j2 logging framework is used. Logs are printed to the console as well as saved to a file. The log configuration file, with the log pattern and appenders, is available at src/main/resources/log4j2.xml.
    • The logger is designed to support parallel execution; logs are printed with the Thread ID.
    =================================================
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration status="INFO">
      <Properties>
        <Property name="filename">test-results/logs/opencart-ui-automation.log</Property>
        <Property name="pattern">%style{[%date{yyyy-MMM-dd HH:mm:ss.SSS zzz}]}{yellow} %style{[Thread ID: %tid]}{white} %style{[%5class{1}.%method]}{bright_blue} %highlight{[%level]} %msg%n%throwable</Property>
        <Property name="reportLogPattern">%style{[%date{yyyy-MMM-dd HH:mm:ss.SSS zzz}]}{yellow} %style{[Thread ID: %tid]}{white} %style{[%logger]}{bright_blue} %highlight{[%level]} %msg%n%throwable</Property>
      </Properties>
      <ThresholdFilter level="DEBUG" />
      <Appenders>
        <Console name="STDOUT">
          <PatternLayout disableAnsi="false">
            <MarkerPatternSelector defaultPattern="${pattern}">
              <PatternMatch key="ReportLog" pattern="${reportLogPattern}" />
            </MarkerPatternSelector>
          </PatternLayout>
        </Console>
        <File name="File" fileName="${filename}" append="false">
          <PatternLayout disableAnsi="false">
            <MarkerPatternSelector defaultPattern="${pattern}">
              <PatternMatch key="ReportLog" pattern="${reportLogPattern}" />
            </MarkerPatternSelector>
          </PatternLayout>
        </File>
      </Appenders>

      <Loggers>
        <Root level="DEBUG">
          <AppenderRef ref="STDOUT" />
          <AppenderRef ref="File" />
        </Root>
      </Loggers>
    </Configuration>
    ==========================================================

    Reporting

    • The Extent Spark reporter is used for test reports. The report configuration (theme, timestamp, report name, document title) is available at src/main/resources/extent-report-config.xml.
    • Reports are generated at the end of the test execution, i.e., in @AfterSuite.
    • All tests in a single class are captured in a single ExtentTest with multiple test nodes.
    • The system/environment variables in the report are captured from the runtime/config properties.
    • For each test class (considered a scenario), one ExtentTest is created, and for each test under the scenario class, an Extent test node is created.
    =========================================================
    <?xml version="1.0" encoding="UTF-8"?>
    <extentreports>
      <configuration>

        <!-- report theme -->
        <!-- STANDARD, DARK -->
        <theme>STANDARD</theme>

        <!-- document encoding -->
        <!-- defaults to UTF-8 -->
        <encoding>UTF-8</encoding>

        <!-- protocol for script and stylesheets -->
        <!-- defaults to https -->
        <!-- HTTP, HTTPS -->
        <protocol>HTTPS</protocol>

        <!-- enable or disable the timeline section -->
        <timelineEnabled>true</timelineEnabled>

        <!-- offline report -->
        <enableOfflineMode>false</enableOfflineMode>

        <!-- use thumbnails for base64 images -->
        <!-- this may slowdown viewing tests -->
        <thumbnailForBase64>false</thumbnailForBase64>

        <!-- title of the document -->
        <documentTitle>Open Cart UI Test Execution Report</documentTitle>

        <!-- report name - displayed at top-nav -->
        <reportName>Open Cart UI Test Execution Report</reportName>

        <!-- timestamp format -->
        <timeStampFormat>yyyy-MMM-dd HH:mm:ss</timeStampFormat>

        <!-- custom javascript -->
        <scripts>
          <![CDATA[
            $(document).ready(function() {
                
            });
          ]]>
        </scripts>

        <!-- custom styles -->
        <styles>
          <![CDATA[

          ]]>
        </styles>

      </configuration>
    </extentreports>

    ==========================================================
    Extent Report class:

    src/main/java/utils/ExtentReporter.java
    ----------------------------------------------------------------------------------------------------
    package utils;

    import java.io.IOException;
    import java.nio.file.Paths;

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;
    import org.apache.logging.log4j.Marker;
    import org.apache.logging.log4j.MarkerManager;

    import com.aventstack.extentreports.ExtentReports;
    import com.aventstack.extentreports.ExtentTest;
    import com.aventstack.extentreports.Status;
    import com.aventstack.extentreports.reporter.ExtentSparkReporter;

    /**
     * Extent Report class for the Report generation
     * @author Ramesh Ch
     */
    public class ExtentReporter {

        private ExtentReporter() {
            throw new IllegalStateException("Extent Reporter class instantiation is not allowed");
        }

        /**
         * Method to configure and get the ExtentReporter instance
         * 
         * @param testProperties - {@link TestProperties}
         * @return ExtentReports - Returns {@link ExtentReports} instance
         * @throws IOException - Throws {@link IOException}
         */
        public static ExtentReports getExtentReporter(TestProperties testProperties) throws IOException {
            ExtentSparkReporter reporter = new ExtentSparkReporter(testProperties.getProperty("extentReportPath"));
            reporter.loadXMLConfig("./src/main/resources/extent-report-config.xml");

            reporter.config().setCss("img.r-img { width: 30%; }");
            ExtentReports extentReports = new ExtentReports();
            extentReports.attachReporter(reporter);

            String applicationURL = "<a href=\"" + testProperties.getProperty("url")
                    + "\" target=\"_blank\">Open cart Demo Application</a>";
            extentReports.setSystemInfo("Application", applicationURL);

            extentReports.setSystemInfo("OS", System.getProperties().getProperty("os.name"));
            extentReports.setSystemInfo("Browser", testProperties.getProperty("browser"));

            if (Boolean.parseBoolean(testProperties.getProperty("enableRecordVideo"))) {
                String filePath = Paths.get(testProperties.getProperty("recordVideoDirectory")).toAbsolutePath()
                        .toString();
                String recordedVideoFilePath = "<a href=\"" + filePath
                        + "\" target=\"_blank\">Execution Recorded Video</a>";
                extentReports.setSystemInfo("Execution Recorded Video", recordedVideoFilePath);
            }
            return extentReports;
        }

        /**
         * Method to add the log the step to extent report
         * 
         * @param extentTest - {@link ExtentTest}
         * @param status     - {@link Status}
         * @param message    - {@link String} log message
         */
        public static void extentLog(ExtentTest extentTest, Status status, String message) {
            extentTest.log(status, message);
            log(status, message);
        }

        /**
         * Method to add the log step with the screenshot to the extent report
         * 
         * @param extentTest - {@link ExtentTest}
         * @param status     - {@link Status}
         * @param message    - {@link String} log message
         * @param base64Path - {@link java.util.Base64} {@link String} of screenshot
         */
        public static void extentLogWithScreenshot(ExtentTest extentTest, Status status, String message,
                String base64Path) {
            String imageElement = "<br/><img class='r-img' src='data:image/png;base64," + base64Path
                    + "' href='data:image/png;base64," + base64Path + "' data-featherlight='image'>";
            extentTest.log(status, message + imageElement);
            log(status, message);
        }

        /**
         * Method to log the message to console and log file.
         * It removes any HTML elements in the message before logging
         * 
         * @param status  - {@link Status}
         * @param message - {@link String} log message
         */
        private static void log(Status status, String message) {
            message = message.replaceAll("\\<.*?\\>", "");
            // Derive the logger name from the calling class and method
            StackTraceElement caller = Thread.currentThread().getStackTrace()[3];
            Logger log = LogManager.getLogger(caller.getClassName().split("\\.")[1] + "." + caller.getMethodName());
            Marker marker = MarkerManager.getMarker("ReportLog");
            switch (status) {
                case FAIL:
                case WARNING:
                case SKIP:
                    log.warn(marker, message);
                    break;
                case INFO:
                    log.info(marker, message);
                    break;
                default:
                    log.debug(marker, message);
                    break;
            }
        }

    }
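The non-greedy tag-stripping regex used by log() is easy to experiment with outside Java; here is an equivalent JavaScript sketch (the sample message is invented for illustration):

```javascript
// Strip HTML elements from a report message before logging,
// using the same non-greedy <.*?> pattern as log() above.
const message = "Clicked <b>Login</b> button<br/><img src='x'>";
const plain = message.replace(/<.*?>/g, "");
console.log(plain); // Clicked Login button
```

The non-greedy `.*?` matters: a greedy `<.*>` would swallow everything from the first `<` to the last `>`, deleting the text in between.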


    ---------------------------------------------------------------------------------------------
    Reading Properties:

    src/main/java/utils/TestProperties.java
    ----------------------------------------------------------------------------------------------
    package utils;

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    /**
     * Class to load, get, set and update the test properties
     * @author  Ramesh Ch
     */
    public class TestProperties {

        private Logger log = LogManager.getLogger();
        private Properties prop;

        /**
         * Constructor - Load the config properties for Test
         */
        public TestProperties() {
            this.prop = new Properties();
            try (FileInputStream fileInputStream = new FileInputStream("./src/main/resources/config.properties")) {
                prop.load(fileInputStream);
            } catch (IOException e) {
                log.error("Error while reading properties file ", e);
            }
        }

        /**
         * Utility method to get the property value after trim
         * 
         * @param key - {@link String} - key of the property
         * @return String - Returns value {@link String} of the property
         */
        public String getProperty(String key) {
            return prop.getProperty(key) != null ? prop.getProperty(key).trim() : null;
        }

        /**
         * Method to set the property with the value
         * 
         * @param key   - {@link String} - key of the property
         * @param value - {@link String} - value of the property
         */
        public void setProperty(String key, String value) {
            prop.setProperty(key, value);
        }

        /**
         * Method to update the Test properties with Run time System Property
         * 
         */
        public void updateTestProperties() {
            prop.keySet().forEach(key -> {
                String propKey = (String) key;
                if (System.getProperty(propKey) != null)
                    prop.setProperty(propKey, System.getProperty(propKey));
            });
        }
    }
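The override pattern in updateTestProperties() (run-time values win over file-loaded defaults) is language-neutral; a small JavaScript analogue, with invented property names:

```javascript
// File-loaded defaults (analogous to config.properties)
const props = { browser: "chrome", environment: "qa" };

// Simulate a run-time override, e.g. `mvn clean test -Dbrowser=firefox`
process.env.browser = "firefox";

// Mirror updateTestProperties(): any matching runtime value replaces the default
for (const key of Object.keys(props)) {
  if (process.env[key] !== undefined) props[key] = process.env[key];
}

console.log(props.browser); // firefox
```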


    ---------------------------------------------------------------------------------------------------

    Test Retry Capability:

    src/test/java/testutils/RetryAnalyzer.java

    package testutils;

    import org.testng.IRetryAnalyzer;
    import org.testng.ITestResult;

    public class RetryAnalyzer implements IRetryAnalyzer {

        private int retryCount = 1;
        private int maxRetryCount = 2;

        @Override
        public boolean retry(ITestResult iTestResult) {

            if (retryCount <= maxRetryCount) {
                // Add custom attribute for retry
                iTestResult.getTestContext().setAttribute("retryCount", retryCount);
                retryCount++;
                iTestResult.setStatus(ITestResult.FAILURE);
                return true;
            } else {
                iTestResult.setStatus(ITestResult.FAILURE);
            }
            return false;
        }
    }
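The retry-counting logic above is a simple stateful check; a minimal JavaScript sketch of the same idea (names hypothetical, no TestNG involved):

```javascript
// Minimal sketch of the retry-analyzer idea: allow a failing
// operation to be retried up to maxRetryCount times, then give up.
function makeRetryAnalyzer(maxRetryCount) {
  let retryCount = 1;
  return function shouldRetry() {
    if (retryCount <= maxRetryCount) {
      retryCount++;
      return true; // signal the runner to re-execute the test
    }
    return false;  // retries exhausted; keep the failure
  };
}

const retry = makeRetryAnalyzer(2);
console.log(retry(), retry(), retry()); // true true false
```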

    Test Execution

    • The tests can be executed with the Maven test command, or as individual TestNG tests run locally after cloning the repo. To enable retries, reference the analyzer on a test, e.g. @Test(retryAnalyzer = testutils.RetryAnalyzer.class).
    • Sample Maven commands:
         mvn clean test                          (OR)
      
         mvn clean test -DProperty=value         (OR)
      
         SELENIUM_REMOTE_URL="http://localhost:4444/wd/hub" mvn clean test
    ====================================================================

    Running Playwright Tests via Azure DevOps Pipeline

    This story explains the end-to-end process of executing Playwright tests (hosted on GitHub) via Azure DevOps Pipeline. This guide will provide step-by-step instructions on setting up a GitHub repository, installing Playwright, configuring an Azure DevOps pipeline, and executing Playwright tests as part of a CI/CD workflow.

    Overview

    Playwright is a powerful tool for automating browser interactions, and Azure DevOps Pipelines provide a robust CI/CD platform for automating build, test, and deployment processes. By integrating Playwright tests into Azure DevOps Pipelines, teams can ensure that their web applications are thoroughly tested before deployment, reducing the risk of bugs and regressions reaching production.

    Prerequisites

    Before you begin, ensure you have the following prerequisites:

    • GitHub account: You will need a GitHub account to create a new repository and push your Playwright project to it. If you don’t have a GitHub account, you can create one for free at github.com.
    • Azure DevOps account: An Azure DevOps account is required to create and configure the CI/CD pipeline for your Playwright tests. If you don’t have an Azure DevOps account, you can sign up for one at dev.azure.com.
    • Git installed on your local machine: Git is a version control system that is used to manage your code changes. If you haven’t installed Git yet, you can download it from git-scm.com.
    • Visual Studio Code (VS Code) installed on your local machine: VS Code is a lightweight but powerful source code editor. It is available for Windows, macOS, and Linux. You can download it from code.visualstudio.com.

    Step 1: Create new GitHub Project

    1. Login to GitHub, create a new GitHub project repository and clone it to your local machine.

    Step 2: Add .gitignore file

    1. On your local machine open the cloned GitHub repo in Visual Studio Code (VSCode).
    2. Create a .gitignore file and add the following directories:
    node_modules/
    test-results/
    tests-examples/
    playwright-report/
    playwright/.cache/

    Step 3: Install Playwright

    1. In the Terminal window in VSCode, install Playwright using the command:
      npm init playwright@latest
      During installation, select your preferred language, TypeScript or JavaScript (examples in this guide are shown in JavaScript), and accept the default selections for the other prompts.
    2. After the installation, Playwright generates example tests including a test file named tests/example.spec.js that contains two basic tests navigating to the Playwright Home Page URL and doing validations.
    3. Run the tests with the command npx playwright test
    // example.spec.js
    const { test, expect } = require('@playwright/test');

    test('has title', async ({ page }) => {
      await page.goto('https://playwright.dev/');

      // Expect a title "to contain" a substring.
      await expect(page).toHaveTitle(/Playwright/);
    });

    test('get started link', async ({ page }) => {
      await page.goto('https://playwright.dev/');

      // Click the get started link.
      await page.getByRole('link', { name: 'Get started' }).click();

      // Expects page to have a heading with the name of Installation.
      await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
    });

    Step 4: Update the Playwright Config file

    1. Open the playwright.config.js file in VSCode and add the following:
    // @ts-check
    const { defineConfig, devices } = require('@playwright/test');

    /**
     * @see https://playwright.dev/docs/test-configuration
     */
    module.exports = defineConfig({
      testDir: './tests',
      /* Run tests in files in parallel */
      fullyParallel: true,
      /* Fail the build on CI if you accidentally left test.only in the source code. */
      forbidOnly: !!process.env.CI,
      /* Retry on CI only */
      retries: process.env.CI ? 2 : 0,
      /* Opt out of parallel tests on CI. */
      workers: process.env.CI ? 1 : undefined,

      /* Reporter to use. See https://playwright.dev/docs/test-reporters
         Set to 'never'; change to 'always' to launch the report automatically after execution */
      reporter: [
        ['html', { open: 'never' }],
        ['junit', { outputFile: 'results.xml' }] // required for Azure DevOps Pipeline
      ],

      use: {
        /* Maximum time each action such as `click()` can take. Defaults to 0 (no limit). */
        actionTimeout: 60 * 1000,
        navigationTimeout: 30 * 1000,

        /* Collect trace when retrying the failed test.
           See https://playwright.dev/docs/trace-viewer */
        trace: 'on',
        screenshot: 'only-on-failure',
        video: {
          mode: 'on'
        },
        headless: true,
        viewport: { width: 1900, height: 940 },
        launchOptions: {
          slowMo: 500,
        },
      },

      /* Configure projects for major browsers */
      projects: [
        {
          name: 'chromium',
          use: { ...devices['Desktop Chrome'] },
        },
        {
          name: 'firefox',
          use: { ...devices['Desktop Firefox'] },
        },
        {
          name: 'webkit',
          use: { ...devices['Desktop Safari'] },
        },
      ],
    });

    Step 5: Run Playwright Tests

    1. Run Playwright test using npx playwright test command
    2. Optionally, update the package.json file with the following to run Playwright tests with the shortened command npm test:
    "scripts": {
    "test": "npx playwright test --workers=1"
    },

    Step 6: Push updated Project to GitHub

    1. After executing your tests and confirming they run successfully, push your repo to GitHub.
    2. Push all the updates we’ve made to the project so far to GitHub:
      git add .
      git commit -m "Add Playwright Tests"
      git push

    At this point our project files are pushed to the remote repository, but we don’t yet have a pipeline that would execute the tests we just uploaded.

    Next we’re going to configure Azure DevOps project and create the pipeline to run the tests via Azure DevOps.

    Step 7: Creating Azure DevOps Organization and Project

    1. Create a new Azure DevOps organization:
    • Go to dev.azure.com and sign in with your Azure DevOps account.
    • Click on your profile icon in the top right corner and select “Organizations.”
    • Click on the “New organization” link.
    • Enter a name for your organization and follow the prompts to create it.

    2. Create a new project in the organization:

    • Once you’ve created the organization, click on the “New project” button on the organization’s home page.
    • Enter a name for your project, select the visibility (private), and choose a version control system (Git).
    • Click “Create” to create the project.

    3. Select your new project and navigate to Pipelines

    Step 8: Creating pipeline YML files

    Before we can create a pipeline in Azure DevOps, we need to add YAML files to our GitHub Playwright project and create instructions for running the pipeline. We are going to add two files: playwright-template.yml and playwright-automation.yml. Here's how you can do it:

    Adding YAML Files for Azure DevOps CI/CD Pipeline

    Create an azure-pipelines directory in your Playwright project:

    • In your Visual Studio Code editor, navigate to the root of your Playwright project.
    • Create a new directory named azure-pipelines.

    Add playwright-template.yml and playwright-automation.yml files inside the azure-pipelines directory:

    Edit the playwright-template.yml file:

    • Open the playwright-template.yml file in Visual Studio Code.
    • Add the following YAML code to define the template for your Playwright tests. This file will contain the common steps and configurations that will be reused across different test scenarios.
    • This pipeline is designed to run Playwright tests in an Azure DevOps pipeline.
    • It defines a parameter BASE_URL that allows you to specify the base URL for your tests.
    • The pipeline consists of a single job named test that contains several steps:
    1. Use Node version 16: Sets the Node version to 16.x for the pipeline.
    2. NPM Install: Installs the project dependencies using npm ci.
    3. Playwright Install: Installs Playwright and its dependencies.
    4. Run Playwright Tests: Sets the BASE_URL parameter, configures Playwright to run in CI mode, specifies the output format for JUnit, and runs the Playwright tests. This step continues even if there are errors (continueOnError: true).
    5. Add playwright-report to Archive: Archives the playwright-report folder to a zip file.
    6. Add test-results to Archive: Archives the test-results folder to a zip file.
    7. Publish Pipeline Artifacts: Publishes the archived files as pipeline artifacts.
    8. Publish Test Results: Publishes the test results in JUnit format with the title “Playwright ADO Demo — $(System.StageName)”.
    parameters:
      - name: BASE_URL
        type: string

    jobs:
      - job: test
        displayName: Run Playwright Tests
        steps:
          - download: none

          - checkout: self

          - task: NodeTool@0
            displayName: 'Use Node version 16'
            inputs:
              versionSpec: 16.x

          - script: |
              npm ci
            displayName: "NPM Install"

          - script: |
              npx playwright install --with-deps
            displayName: "Playwright Install"

          - script: |
              set BASE_URL=${{ parameters.BASE_URL }}
              set CI=true
              set PLAYWRIGHT_JUNIT_OUTPUT_NAME=results.xml
              npx playwright test
            displayName: "Run Playwright Tests"
            continueOnError: true

          - task: ArchiveFiles@2
            displayName: 'Add playwright-report to Archive'
            inputs:
              rootFolderOrFile: '$(Pipeline.Workspace)/s/playwright-report/'
              archiveFile: '$(Agent.TempDirectory)/$(Build.BuildId)_$(System.JobAttempt)$(System.StageAttempt).zip'

          - task: ArchiveFiles@2
            displayName: 'Add test-results to Archive'
            inputs:
              rootFolderOrFile: '$(Pipeline.Workspace)/s/test-results/'
              archiveFile: '$(Agent.TempDirectory)/$(Build.BuildId)_$(System.JobAttempt)$(System.StageAttempt).zip'
              replaceExistingArchive: false

          - task: PublishPipelineArtifact@1
            displayName: 'Publish Pipeline Artifacts'
            inputs:
              targetPath: '$(Agent.TempDirectory)/$(Build.BuildId)_$(System.JobAttempt)$(System.StageAttempt).zip'
              artifact: pipeline-artifacts

          - task: PublishTestResults@2
            displayName: 'Publish Test Results'
            inputs:
              testResultsFormat: 'JUnit'
              testResultsFiles: '$(Pipeline.Workspace)/s/results.xml'
              testRunTitle: 'Playwright ADO Demo - $(System.StageName)'

    Edit the playwright-automation.yml file:

    • Open the playwright-automation.yml file in Visual Studio Code.
    • Add the necessary YAML code to define the pipeline for running your Playwright tests. This file will include steps to install dependencies, build the project, and run the Playwright tests.
    pool:
      vmImage: 'windows-latest'

    trigger:
      branches:
        include:
          - 'main'

    name: $(Build.BuildId)

    stages:
      - stage: qa
        displayName: 'Run Automation Test - QA'
        dependsOn: []
        jobs:
          - template: playwright-template.yml
            parameters:
              BASE_URL: ''

    Creating a Pipeline in Azure DevOps

    Now we can create a new pipeline in Azure DevOps, select a Git repository, and choose the existing YAML files option:

    Sign in to Azure DevOps:

    • Go to dev.azure.com and sign in with your Azure DevOps account.

    Navigate to your project:

    • Select the organization and project where you want to create the pipeline.

    Create a new pipeline:

    • Click on “Pipelines” in the left sidebar.
    • Click on the “Create Pipeline” button.

    Select your repository:

    • In the “Where is your code?” screen, select the GitHub (YAML) option.
    • Choose the Git repository where your Playwright project is located.
    • If you haven’t connected your repository yet, click on “Connect” and follow the prompts to connect to your Git repository.

    Configure your pipeline:

    • In the “Configure your pipeline” step, select “Existing Azure Pipelines YAML file” as the pipeline configuration method.
    • Click on the “YAML file path” field and select the playwright-automation.yml file from the azure-pipelines directory in your repository.
    • Click “Continue” to proceed.
    • Review the pipeline configuration to ensure it matches your requirements.
    • Click on “Run” to save the pipeline configuration and run the pipeline.

    Monitor the pipeline execution:

    • Once the pipeline is running, you can monitor its progress in the Azure DevOps interface.
    • The pipeline will automatically run the tasks defined in your playwright-automation.yml file, such as installing dependencies, building the project, and running the Playwright tests.

    View the results:

    • After the pipeline has completed, you can view the results of the Playwright tests in the Azure DevOps interface.
    • Any test failures or other issues will be reported in the pipeline logs.
    • You can download execution results, including traces, video files, and screenshots, and view them on your local machine.
    • NOTE: Your Playwright tests are now set up to run automatically in Azure DevOps whenever you push changes to your Git repository.

    Adding Pipeline Parameters

    Adding pipeline parameters in Azure DevOps can be beneficial for several reasons.

    • It allows for greater flexibility and reusability of pipelines.
    • By defining parameters, you can customize the behavior of your pipeline based on different scenarios, such as deploying to different environments or running different sets of tests.
    • This flexibility reduces the need to create multiple similar pipelines, simplifying your pipeline configuration and maintenance.

    To add a pipeline parameter to our example, we are going to modify the project files as follows:

    1. Update the Playwright tests in example.spec.js to use a global variable for the website URL. Optionally, add a log statement to see how the global variable value is used in the test.
    // @ts-check
    const { test, expect } = require('@playwright/test');

    test('has title', async ({ page }) => {
      await page.goto(global.BASE_URL);

      console.log('Test 1 [has title]: Global BASE_URL variable: ', global.BASE_URL);

      // Expect a title "to contain" a substring.
      await expect(page).toHaveTitle(/Playwright/);
    });

    test('get started link', async ({ page }) => {
      await page.goto(global.BASE_URL);

      console.log('Test 2 [get started link]: Global BASE_URL variable: ', global.BASE_URL);

      // Click the get started link.
      await page.getByRole('link', { name: 'Get started' }).click();

      // Expects page to have a heading with the name of Installation.
      await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
    });

    2. Update playwright.config.js to reference the global variable:

    const { defineConfig, devices } = require('@playwright/test');

    Object.assign(global, {
      BASE_URL: process.env.BASE_URL,
    });

    //... the rest of the playwright.config.js file

    3. Update playwright-automation.yml with the BASE_URL value:

    pool:
      vmImage: 'windows-latest'

    trigger:
      branches:
        include:
          - 'main'

    name: $(Build.BuildId)

    stages:
      - stage: qa
        displayName: 'Run Automation Test - QA'
        dependsOn: []
        jobs:
          - template: playwright-template.yml
            parameters:
              BASE_URL: 'https://playwright.dev/'

    4. Confirm playwright-template.yml file has BASE_URL parameter referenced as follows:

    parameters:
      - name: BASE_URL

    ...

      - script: |
          set BASE_URL=${{ parameters.BASE_URL }}

    ...

    5. Push your updates to GitHub which will trigger automatic pipeline execution:

    6. View Execution Results



    ==========================================================

    Run Playwright Tests via GitLab CI/CD Pipeline using project-level variables, saving results to artifacts.

    • creating and cloning a GitLab repository
    • installing Playwright and generating basic tests
    • running Playwright tests via GitLab CI/CD pipeline
    • using GitLab project-level variables
    • publishing Playwright test results as pipeline artifacts
    • viewing the artifacts via GitLab pages.
    • Let’s go!

    Step 1: Create new GitLab Project

    1. Login to GitLab, create a new project repository and clone it to your local machine.

    Step 2: Add .gitignore file

    1. On your local machine open the cloned GitLab repo in Visual Studio Code (VSCode).
    2. Create a .gitignore file and add the following directories:
    node_modules/
    test-results/
    tests-examples/
    playwright-report/
    playwright/.cache/

    Step 3: Install Playwright

    1. In the Terminal window in VSCode, install Playwright using the command:
      npm init playwright@latest
      During installation, select your preferred language, TypeScript or JavaScript (examples in this guide are shown in JavaScript), and accept the default selections for the other prompts.
    2. After the installation, Playwright generates example tests including a test file named tests/example.spec.js that contains two basic tests navigating to the Playwright Home Page URL and doing validations.
    3. Run the tests with the command npx playwright test
    // example.spec.js

    const { test, expect } = require('@playwright/test');

    test('has title', async ({ page }) => {
      await page.goto('https://playwright.dev/');
      // Expect a title "to contain" a substring.
      await expect(page).toHaveTitle(/Playwright/);
    });

    test('get started link', async ({ page }) => {
      await page.goto('https://playwright.dev/');
      // Click the get started link.
      await page.getByRole('link', { name: 'Get started' }).click();
      // Expects page to have a heading with the name of Installation.
      await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
    });

    Step 4: Update the Playwright Config

    First we’re going to replace the hardcoded URL in our code with a reference to a global variable we will create in the Playwright config file.

    1. Open the playwright.config.js file in VSCode and add the following:
    Object.assign(global, {
      BASE_URL: 'https://playwright.dev/',
    });

    2. Optionally, add this to playwright.config.js to always capture a trace and record a video of the execution:

      use: {
        trace: "on" /* Collect trace always. See https://playwright.dev/docs/trace-viewer */,
        video: "on" /* Record video */,
        screenshot: "only-on-failure",
        headless: true,
        viewport: { width: 1900, height: 940 },
        launchOptions: {
          slowMo: 500,
        },
      },

    Step 5: Update the Playwright Test

    1. Open the example.spec.js file in VSCode.
    2. Replace the hardcoded URL https://playwright.dev/ with global.BASE_URL. This allows the test to navigate to the URL defined in the global BASE_URL variable in the Playwright config.
    3. Here’s the modified example.spec.js file:
    4. Run the tests with the command npx playwright test to confirm the update.
    // example.spec.js

    const { test, expect } = require('@playwright/test');

    test('has title', async ({ page }) => {
      await page.goto(global.BASE_URL);

      // Expect a title "to contain" a substring.
      await expect(page).toHaveTitle(/Playwright/);
    });

    test('get started link', async ({ page }) => {
      await page.goto(global.BASE_URL);

      // Click the get started link.
      await page.getByRole('link', { name: 'Get started' }).click();

      // Expects page to have a heading with the name of Installation.
      await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
    });

    Step 6: Push updated Project to GitLab

    1. Push all the updates we’ve made to the project so far to GitLab:
      git add .
      git commit -m "Add Playwright Tests"
      git push

    At this point our project files are pushed to the remote repository, but we don’t yet have a pipeline that would execute the tests we just uploaded.

    Next we’re going to configure the pipeline to run the tests and publish the results, and create a project-level variable for BASE_URL in the GitLab project settings.

    Step 7: Define a Project Level Variable in GitLab

    1. Go to your remote project’s Settings in GitLab.
    2. Navigate to Settings > CI/CD section.
    3. Expand the Variables section.
    4. Click on “Add Variable”.
    5. In the Key field, enter BASE_URL.
    6. In the Value field, enter the URL, i.e. https://playwright.dev/
    7. Click on “Add Variable” to save it.

    Step 8: Run the Test via GitLab CI/CD Pipeline

    1. In VSCode, create a .gitlab-ci.yml file in your local project root.
    2. Define a job that runs the Playwright tests and publishes the run results folder playwright-report as artifacts with a 2-day expiration period.
    3. We also add a shortcut link to view execution results (saves 4 clicks) right from the pipeline execution log.
    # Variables
    variables:
      BASE_URL: $BASE_URL # variable defined in GitLab Project CI/CD Settings

    # Define the stages for your pipeline
    stages:
      - test

    # Define the job for running Playwright tests
    run_playwright_tests:
      stage: test
      image: mcr.microsoft.com/playwright:v1.39.0-jammy
      script:
        - npm ci # Install project dependencies
        - npx playwright test # Run your Playwright tests
        - echo "https://$CI_PROJECT_NAMESPACE.gitlab.io/-/$CI_PROJECT_NAME/-/jobs/$CI_JOB_ID/artifacts/playwright-report/index.html" # print the URL to view the results
      allow_failure: true

      # Publish Playwright test results as artifacts and keep them for 2 days
      artifacts:
        when: always
        paths:
          - playwright-report
        expire_in: 2 days

    Update the Playwright config file to reference the GitLab project variable:

    Object.assign(global, {
      BASE_URL: process.env.BASE_URL,
    });

    Commit and push the .gitlab-ci.yml file to the remote repo in GitLab with the commands:

    git add .
    git commit -m "Add GitLab CI/CD configuration"
    git push

    The pipeline should start automatically in the remote project in GitLab and run your test.

    View the results of the pipeline execution in the Build > Jobs menu.

    Now your pipeline in GitLab runs the automated script, which takes the BASE_URL variable value from the project settings. This setup allows you to easily change the test URL without modifying the test script, as well as add other project-level variables such as usernames and passwords.

    View result by clicking the link from the pipeline execution log:

    Pipeline Job run log


    Parting Notes

    1. Now that your local project is updated to use the BASE_URL variable from the GitLab project settings, how can you still run your tests locally? One way is to create a local system environment variable named BASE_URL and populate its value; you’ll then still be able to run your tests locally.
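Another option is to give playwright.config.js a fallback default when the environment variable is missing; a minimal sketch (the fallback URL here is illustrative, not part of the original project file):

```javascript
// playwright.config.js (excerpt) - sketch only.
// Use the CI/CD or local environment variable when present,
// otherwise fall back to an illustrative default URL.
const BASE_URL = process.env.BASE_URL || 'https://playwright.dev/';

Object.assign(global, { BASE_URL });

console.log('Tests will run against:', global.BASE_URL);
```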

    2. Shorten the test execution command to npm test instead of npx playwright test by updating the scripts section in package.json as follows:

     "scripts": {
    "test": "npx playwright test --workers=1"
    },

    Clone this project to try it out

    This project can be found on GitLab at:

    Resources and References

    1. Playwright Documentation for GitLab CI https://playwright.dev/docs/ci#gitlab-ci
    2. GitLab Playwright automation templates: https://gitlab.com/automation4175606/playwright-tests
    3. Playwright and Gitlab CI: how to run your E2E tests: https://medium.com/@jeremie.fleurant/playwright-and-gitlab-ci-how-to-run-your-e2e-tests-42a51fd3e54e