Using Dockerised Testcafe Scripts to measure the Reliability of Applications

Every industry solution we see today defines a certain level of user experience. The most meaningful way to measure the user experience of an application is to exercise its functionality from an external point of view, by mimicking a real user's behaviour. This is why we employ Quality Assurance Engineers: to test the application from a "real user's" perspective and confirm that the user experience is as expected before releasing the application into production. UI (User Interface) automation may seem like the primary responsibility of a QA Engineer rather than a Software Engineer, and that is fair: UI automation scripts help QA Engineers test the application as a real user and look for bugs and unexpected behaviour. But I would like to show another side of UI automation from a developer's perspective, where continuous UI automation can tell us, in real time, whether a production application is working as expected.

When QA Engineers use UI automation, it is mostly a "readiness" check before pushing new features into production. When a Site Reliability Engineer uses the same approach, the goal is different: to ensure that the application runs reliably and provides the expected user experience to all of its users, at all times. In one of my previous articles, I explained the role of a Site Reliability Engineer (SRE).

In this article, I would like to show you how an SRE can leverage UI automation scripts to ensure that an application is running reliably at all times. I will also show you how to run these UI automation scripts in a Dockerised environment (something I struggled with for quite some time).

This article covers the following sections:

  1. How UI Automation can be used to measure the reliability of applications

  2. Writing a simple UI Automation Script via Testcafe

  3. Running a UI Automation script on a GUI vs Headless

  4. Dockerising your UI Automation Script


Using UI Automation to Measure the Reliability of Applications


There are multiple ways to measure the reliability of a production application. This can be defined in different ways, based on the deployment structure, purpose of the application, and even the type of expected users. Some of the common ways of measuring the reliability of applications are given below.

  • Pinging application URLs to check liveness

  • The ratio of successful to failed API calls from the application to its backends

  • Browser JavaScript errors

  • Page load times

  • Injecting chaos and verifying that the application can function under various conditions, such as network downtime

Nevertheless, every application has its own important pages and user flows that require careful attention to ensure the application works reliably and provides the expected user experience (UX). Flows such as whether a user can successfully log in, or add items to a cart and check out, are crucial, and they also depend heavily on many other components of the application ecosystem. Yes, QA Engineers verify these before deployment, but what if a dependency breaks in the real production environment? There may be no way to find out until someone files a support ticket.

This is where Site Reliability Engineers mimic user behaviour in these applications as a periodic job that checks whether the major flows function as expected. In other words, UI automation can be leveraged to automatically and periodically measure the reliability of certain parts of the application. With this approach, you will sometimes notice that behaviour changes with user load, network latency, backend failures, and so on. Even though the data is synthetic, this verifies that critical paths work as expected from a user's perspective. To go further, you can also run these scripts from different geographies and observe how behaviour differs across regions.

Once you get the initial scripts and the structure ready, all you have to do is run these scripts in different environments which will give the required metrics at the end of the process.

One difficult decision here is figuring out which major user flows need to be monitored. For practical reasons, you cannot monitor every user flow in an application; for reliability purposes, you only need to measure the major ones. To make this decision, speak to the different stakeholders and identify the most important use cases and user flows of the application.


Writing a simple UI automation script using Testcafe


Testcafe is an application UI automation framework that runs on all popular environments such as Windows, macOS, and Linux. It supports desktop, mobile, remote, and cloud browsers (GUI or headless). Another good thing about it: it is completely open source.

In this section, let's write a simple UI automation script that runs both in a visible browser (GUI) and without one (headless).

Please follow the steps given in the URL below to run your first Testcafe script. Afterwards, we will build a simple script and run it on our own.

The following script is a simple example taken from the above URL. You can write your own script for your application; explaining how is beyond the scope of this article, but if you are interested, the documentation provided by Testcafe is quite thorough.


import { Selector } from 'testcafe';

fixture `Getting Started`
    .page `https://devexpress.github.io/testcafe/example`;

test('My first test', async t => {
    console.log("Starting to type the developer name");

    await t
        .typeText('#developer-name', 'Keet Malin');

    console.log("Hitting the SUBMIT button after entering the developer name");

    await t
        .click('#submit-button');

    console.log("Waiting for the article header to be Thank you, Keet Malin!");

    // Use the assertion to check if the actual header text is equal to the expected one
    await t.expect(Selector('#article-header').innerText).eql('Thank you, Keet Malin!');
});


To run the above script in Chrome, you can simply use the following command. It will bring up a new Google Chrome window and show you, step by step, how the test runs. I am using Google Chrome as a simple example; you can use any browser you want. All supported browsers are listed here.


testcafe chrome script.js


This approach is preferred if you really want to see how the test simulation is working. For example, an engineer would be interested in seeing the flow of events.

Now let us talk about running this in headless mode. Headless mode will not open a browser window, but the test still runs in a real browser and gives you a result. That means you can run it somewhere with no GUI, such as a terminal over SSH, and get the same result.


testcafe chrome:headless script.js


This approach is preferred when you do not want to watch the simulation but want the end result. It suits periodic jobs that run without user intervention. In this article, I will use the headless approach, so that we can dockerise it and run it in a containerised environment.
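As a side note, TestCafe also ships a programmatic Node.js API, which is handy when a periodic job needs to act on the result in code rather than on a CLI exit code. A minimal sketch (the function name `runHeadlessCheck` is mine, and it assumes the `testcafe` npm package is installed):

```javascript
// Sketch: driving the same headless run from Node.js via TestCafe's
// programmatic API, so a periodic job can act on the result in code.
async function runHeadlessCheck(scriptPath) {
    const createTestCafe = require('testcafe'); // lazy require: defining this function needs no browser
    const testcafe = await createTestCafe();
    try {
        const failedCount = await testcafe
            .createRunner()
            .src(scriptPath)
            .browsers('chrome:headless')
            .run();
        return failedCount; // 0 means the whole UI flow passed
    } finally {
        await testcafe.close();
    }
}
```

A scheduler can then call `runHeadlessCheck('script.js')` and raise an alert whenever the returned count is non-zero.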


For both the above options, you will see a response like this.


[Screenshot: TestCafe headless test run output]





Dockerizing your Testcafe Execution


Now that you have the script ready, it is time to run it in a Docker container. Fortunately, Testcafe provides a Docker image with Chromium and Firefox installed.


The following article explains the features of the Docker Image.

TestCafe provides a preconfigured Docker image with Chromium and Firefox installed.


Now, let us create our own Dockerfile based on this image and prepare it to run.


FROM testcafe/testcafe
COPY . /tests
CMD [ "chromium:headless", "/tests/script.js" ]


Now run the following command from the same directory as the Dockerfile and build the docker image.


docker build -t testcafe .


Now, run the docker image using the following command.


docker run testcafe


Now that the Docker image is ready, you can run it periodically as a Kubernetes CronJob to make sure that your UI flow is working as expected.

Make sure you have a proper reporting mechanism in place for failures. I used Slack alerts for this purpose; I had to write that code from scratch and add it into the Docker container.
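As a rough illustration of such a failure report (the webhook URL and both function names are my own placeholders, not the code I shipped), a Slack incoming-webhook alert can be as small as this:

```javascript
// Build the Slack message for a failed reliability check.
function buildSlackAlert(flowName, failedCount) {
    const plural = failedCount === 1 ? '' : 's';
    return {
        text: `:rotating_light: UI reliability check "${flowName}" failed (${failedCount} failing test${plural})`
    };
}

// Post it to a Slack incoming webhook. Node 18+ has a global fetch;
// older runtimes need a client such as node-fetch.
async function sendSlackAlert(webhookUrl, flowName, failedCount) {
    await fetch(webhookUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(buildSlackAlert(flowName, failedCount))
    });
}
```

The webhook URL would typically come in via an environment variable on the container, so the same image can report to different channels per environment.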



Key Takeaways


Testcafe is an application UI automation framework. The role of a QA Engineer is to ensure that the quality of the application is tested before it is sent to production; ensuring the reliability of the application throughout its life is the role of a Site Reliability Engineer (SRE).

Another point to note is that this is all synthetic data. It might not represent an actual user's problems, but it can surface problems that occur in the application's ecosystem in general.

In this article, I explained how to use this to validate some of the major flows. With this synthetic monitoring, you can also measure page load times, and gather data on latencies between pages, such as the time to load data or the time components take to render in the browser. With this, you will be able to define Service Level Objectives (SLOs) for your applications. (Defining SLOs for applications is a separate topic and will not be discussed here.)
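As a sketch of that page-load measurement (the helper name and the 3-second budget are illustrative values of mine, not from my actual setup), the browser's Navigation Timing entry gives you the number directly:

```javascript
// Derive a page-load metric (in ms) from a PerformanceNavigationTiming entry.
function loadTimeMs(navEntry) {
    return navEntry.loadEventEnd - navEntry.startTime;
}

// Inside a TestCafe test you would fetch that entry with a ClientFunction:
//
//   import { ClientFunction } from 'testcafe';
//   const getNavEntry = ClientFunction(() =>
//       JSON.parse(JSON.stringify(performance.getEntriesByType('navigation')[0])));
//
//   const ms = loadTimeMs(await getNavEntry());
//   await t.expect(ms).lt(3000); // hypothetical 3 s load-time budget (SLO)
```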

The reason I highlight the word Docker is that the image can easily be run in Kubernetes as a cron job to periodically monitor the user flows. As an SRE, you can also think of other ways to run this; it all depends on the infrastructure used in your organisation.


A simple Job manifest for Kubernetes is given below; a CronJob wraps the same pod template with a schedule.


apiVersion: batch/v1
kind: Job
metadata:
  name: simple-test
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
        - image: <IMAGE_NAME>
          name: simple-test
      restartPolicy: Never
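For the periodic execution mentioned above, a CronJob wraps the same pod template with a schedule. The 15-minute interval below is only an example value; pick one that suits your flows.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: simple-test
spec:
  schedule: "*/15 * * * *"   # example: run the UI check every 15 minutes
  jobTemplate:
    spec:
      backoffLimit: 3
      template:
        spec:
          containers:
            - image: <IMAGE_NAME>
              name: simple-test
          restartPolicy: Never
```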


You can also see the successful completion of a job.


[Screenshot: successful completion of the Kubernetes job]



Hope this was useful to you. You can read my other blog posts here.



© 2021 Creative Software. All Rights Reserved