
QA Guide Rails

Introduction

The QA Team's mission is to drive quality into all software products produced by, or on behalf of, CRUK. The QA Team will follow a programme of continuous Test Improvement to ensure long-term, measurable improvements aligned to CRUK's business, technology and testing requirements.

This document describes a set of guide rails for Quality Assurance, based on the existing best practices for testing at Cancer Research UK.


Quality Assurance Approach and Process

When & how to engage with the QA Team

This guide rail defines the approach and process to be followed by all teams at the start of projects involving new, legacy and 3rd Party CRUK products/tools.

The QA Team is striving to enhance CRUK's testing service throughout the project lifecycle and ultimately assist Technology in improving the quality of the products we deliver to our supporters, whilst pushing defect discovery earlier in the lifecycle.

[Diagram: responsibilities of the Quality Assurance team - security, accessibility, functional, performance, automation, integration, regression, cross-browser/cross-device, UAT, crowd testing, process improvement, defect management and exploratory testing.]

We are striving to do this in the following ways:

  • Testing resources to support Products and Projects
  • Early engagement for scoping Testing Requirements
  • Static testing on project requirements to ensure clarity and lack of ambiguity for all concerned, and on project designs to ensure they are in line with requirements
  • Governance and assurance around testing performed by suppliers prior to delivery to CRUK
  • Structured, documented System Testing that provides traceability back to project requirements and strives to prevent functional defects from moving into UAT (User Acceptance Testing)
  • Providing specialist testing services such as Automation, Load, Performance, Security, Accessibility and Cross-Browser testing
  • Managed, structured and co-ordinated UAT phases
  • Smoke testing in Production post release.
  • Regression testing to support future releases for 3rd Party systems and integrations.

In order to help us gather as much information as possible about your testing and resource requirements, please consult the QA Checklist below, before meeting with the QA Manager or one of the QA Leads.

  1. Background and overview of the project
  2. Project timeline
  3. Does it require internal and/or external development?
  4. SDLC (Software Development Life Cycle) timeline
  5. Background of what the system does and any documentation, system requirements, designs, business cases etc.
  6. Details of the system infrastructure (e.g. Java / .NET / ASP / SQL Server / MySQL)
  7. Key Stakeholders involved
  8. Product/Project Team involved
  9. Has it been submitted to the PMO?

Assurance approach

Projects should require sign-off of Test Summary Reports before being deployed to Production. This may depend on the release process for the respective product. Retrospectives could highlight whether the QA team was engaged early enough.

Exception process

The QA Manager and Project Manager should agree that this guide rail will not be followed for the defined projects/phases.


Agile Products: BDD

Project Teams using an Agile approach for their Product Management, Design, Development and Delivery should use BDD (Behaviour-Driven Development) to assist in getting product owners, business analysts, testers and developers working together.

BDD combines requirements and testing into a single approach, ensuring that requirements are written in a structured language that can be easily understood by everyone in the team, and can be used to check that each feature has been delivered as specified by automatically running a test.


Reasons for using BDD:

  • Increased and improved collaboration amongst all members on the project.
  • High visibility into the project’s progression.
  • The software design follows business value.
  • The ubiquitous language is understandable by all the members of the team, which reduces misconceptions and misunderstanding and makes it easier for new members to join the working process.
  • Software development meets user needs.
  • Developers have more confidence that they won't break existing code, and their work becomes more predictable.
  • By improving the quality of the code, you are reducing maintenance costs and minimising the project’s risks.

Automation

At present, we use Playwright as our test automation framework to automate our functional and e2e tests.
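
For illustration, a minimal Playwright functional test might look like the following sketch (the URL, page content and test names are hypothetical):

// donate.spec.ts - a minimal functional test sketch (names and URL are hypothetical)
import { test, expect } from "@playwright/test";

test("donation page shows the donation form", async ({ page }) => {
  await page.goto("https://example.org/donate"); // hypothetical URL
  // Assert that the key elements of the page are present and usable.
  await expect(page.getByRole("heading", { name: "Donate" })).toBeVisible();
  await expect(page.getByRole("button", { name: "Donate now" })).toBeEnabled();
});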

Steps & Roles

A summary of the steps and roles involved within the BDD process is shown below.

1. Discuss: Discussion or workshop to create a shared understanding of the required feature / user stories.
Identify real-life scenarios and examples that have realistic context. This could be the 3 Amigos session consisting of Product Owner, Developer & QA Person to identify the happy and sad path scenarios.
Role Participation: Business, Developer, QA

2. Distil: Distil the required feature into an executable specification based upon the user stories, examples and acceptance criteria. Specifications are kept in human-readable form using the following format (a worked example follows these steps):
Feature/Story: Title
In order to [value]
As a [role]
I want [feature]
Scenario Outline: Title
Given [context]
And [more context]
When [event]
Then [outcome]
And [another outcome]
Role Participation: Business, Developer, QA

3. Develop: Develop the required feature using automated test-first practices. Automated acceptance tests are built around the identified scenarios. Automated tests are used to support unit and system testing.
Identify potential automation candidates suitable for unit/API/GUI testing
Role Participation: Developer, QA

4. Demonstrate: Demonstrate the implementation by running the acceptance tests and performing manual exploratory tests.
Role Participation: QA
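
As a worked example of the Distil format above, a hypothetical specification (the feature and values are invented for illustration):

Feature: Online donation
In order to support CRUK's research
As a supporter
I want to donate on a fundraising page

Scenario: Successful donation
Given I am on a fundraising page
And I have entered a donation amount of £10
When I confirm my payment details
Then my donation is accepted
And I see a thank-you message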

Assurance approach
Automated tests are run during each sprint demo and are also executed overnight.

Exception process
The QA Manager and Project Manager should agree that this guide rail will not be followed for the defined projects/phases.


Non Agile Products: ISEB 8 Point Check

What is '8 Point Check' and Why is it important?

The purpose of the 8 Point Check is to review the project's Business or Functional Requirements in order to highlight any issues or ambiguities in them and ensure complete clarity for the entire project team, well before any action is taken on the Requirements.

The rationale for adopting the 8 Point Check is that if a tester cannot interpret the Business Requirements with complete clarity, the probability of scripting the tests incorrectly increases dramatically. The same applies to the Development team: unless the Requirements are defined very clearly, it is more likely that developers will not build what the Business intended. Identifying and resolving ambiguities in Business Requirements at the Requirements Definition stage is far quicker, cheaper and easier to manage than going through the entire SDLC (Software Development Life Cycle) and finding defects in the System Testing phase. It is estimated that up to 60% of defects on projects can be traced back to ambiguities in the Requirements.

Prerequisites or Considerations

In order for the 8 Point Check to produce a valuable outcome, a Business or Functional Requirements document must be provided, as a prerequisite, for the project being worked on. Only then can the criteria of the 8 Point Check be applied to the Requirements.

At what Stage/Phase should the '8 Point Check' be executed?

The 8 Point Check should be executed as early as possible in the project, typically when the first draft of the project requirements is published. The earlier any ambiguities can be identified and eliminated, the greater the savings in project resources. The results of the 8 Point Check should be shared with the Lead Business Analyst/Business Analyst, who will then address the feedback as appropriate and publish a revised draft version of the Requirements, clarifying any uncertainties in the process. The 8 Point Check should then be executed against the revised version, and the exercise should continue until the Requirements document satisfies the 8 Point Check completely. The Requirements can then be sent forward for business sign-off. The signed-off version should be considered 'the baseline'. It must be ensured that the Requirements are not baselined prior to passing the 8 Point Check.

The following are the 8 Points to check on each Requirement:

[Diagram: the 8 criteria of the 8 Point Check]

To execute the 8 Point Check, follow the Steps mentioned below:

  1. Download the attached template: CRUK 8 Point Check Template V0 03 DSW 221010.xlsx
  2. Save a copy in your project Directory under Static Testing.
  3. Copy in all of the requirements as per the format in this example.
  4. Work through each requirement in turn, answering each of the 8 criteria separately, using the definitions provided above.
  5. Use 'P' to pass a criterion and 'O' to fail it.
  6. Use 'Y' to 'Include' the rows containing Requirements. Use 'N' on the rows that contain Headings in order to exclude them.
  7. Add any notes or questions you have in the 'Comments' section as you work through the Requirements. It helps to put yourself in the mindset of trying to write a test, so anything you are not clear about can be noted down as a question for the Business or the Business Analyst.
  8. Complete the '8 Point Check' for all requirements and review the results.
  9. Send the document to the Business Analyst and arrange a meeting to discuss the details.
  10. Set the expectations and a deadline for the revised Requirements document.

Assurance approach

Completing the '8 Point Check' should be an exit criterion for Requirements and an entry criterion for Development.

Exception process

The QA Manager and Project Manager should agree that this guide rail will not be followed for the defined projects/phases.


Integration Level Testing

Integration Testing is defined as a type of testing where software modules are integrated logically and tested as a group.

A typical software project consists of multiple software modules, coded by different programmers. Integration Testing focuses on checking data communication amongst these modules.


Integration Testing can be done in the following areas:

  • Integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
  • Component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.
  • System integration testing: Testing the integration of systems and packages; testing interfaces to external organisations (e.g. Electronic Data Interchange, Internet).

Integration test procedure, irrespective of the software testing strategy

  • Prepare the Integration Test Plan.
  • Design the test scenarios, cases, and scripts.
  • Execute the test cases and report any defects.
  • Track and re-test the defects.
  • Repeat steps 3 and 4 until integration testing is successfully completed.

Integration Test Approaches

1. Big Bang: Here all components are integrated together at once and then tested.
Advantages: Convenient for small systems.
Disadvantages:

  • Fault Localization is difficult.
  • Given the sheer number of interfaces that need to be tested in this approach, some interface links could easily be missed.
  • Since the Integration testing can commence only after "all" the modules are designed, the testing team will have less time for execution in the testing phase.
  • Since all modules are tested at once, high-risk critical modules are not isolated and tested on priority. Peripheral modules which deal with user interfaces are also not isolated and tested on priority.

2. Bottom Up: In the bottom-up strategy, modules at the lower levels are tested first and then used to facilitate the testing of higher-level modules, until all modules have been tested. This approach relies on drivers.
Test Drivers are needed to simulate higher-level units which may not be available during the initial phases.
Advantages:

  • Fault localization is easier.
  • No time is wasted waiting for all modules to be developed, unlike the Big Bang approach

Disadvantages:

  • Critical modules (at the top level of software architecture) which control the flow of application are tested last and may be prone to defects.
  • An early prototype is not possible

3. Sandwich/Hybrid: It is an approach to Integration Testing which is a combination of Top Down and Bottom Up approaches.

4. Top Down: In the top-down approach, testing takes place from the top down, following the control flow of the software system.
Test Stubs are needed to simulate lower-level units which may not be available during the initial phases (see the sketch after this list).
Advantages:

  • Fault Localization is easier.
  • Possibility to obtain an early prototype.
  • Critical Modules are tested on priority; major design flaws could be found and fixed first.

Disadvantages:

  • Needs many Stubs.
  • Modules at a lower level are tested inadequately.
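
As an illustration of the stubbing described in the top-down approach, here is a minimal Jest sketch in TypeScript (the module and function names are hypothetical):

// checkout.test.ts - testing a higher-level module with a stubbed lower-level unit
import { processOrder } from "./checkout";
import * as paymentGateway from "./paymentGateway";

// Replace the lower-level unit, which may not be built yet, with a stub.
jest.mock("./paymentGateway");
const gateway = jest.mocked(paymentGateway);

it("confirms an order when the payment succeeds", async () => {
  // The stub returns a canned response in place of the real gateway.
  gateway.charge.mockResolvedValue({ status: "succeeded" });

  const result = await processOrder({ amount: 1000, currency: "GBP" });

  expect(result.confirmed).toBe(true);
  // Verify the higher-level module drives the interface correctly.
  expect(gateway.charge).toHaveBeenCalledWith(1000, "GBP");
});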

Entry Criteria:

  • Unit Tested Components/Modules
  • All high-priority bugs fixed and closed
  • All modules code-complete and integrated successfully
  • Integration Test Plan, test cases and scenarios signed off and documented
  • Required test environment set up for Integration testing

Exit Criteria:

  • Successful Testing of Integrated Application.
  • Executed Test Cases are documented
  • All high-priority bugs fixed and closed
  • Technical documents submitted, followed by release notes.

Best Practices:

  • First, determine the Integration test strategy that could be adopted and later prepare the test cases and test data accordingly.
  • Study the Architecture design of the Application and identify the Critical Modules. These need to be tested on priority.
  • Obtain the interface designs from the Architectural team and create test cases to verify all of the interfaces in detail. Interface to database/external hardware/software application must be tested in detail.
  • After the test cases, test data plays the most critical role.
  • Always have mock data prepared prior to execution; do not select test data while executing the test cases.

Steps & Roles

1. Integration Test Plan: It includes the following attributes:
  • Methods/approaches to testing
  • In-scope and out-of-scope items for Integration Testing
  • Roles and responsibilities
  • Prerequisites for Integration testing
  • Testing environment
  • Risk and mitigation plans
Role Participation: Business, Developer, QA

2. Integration Test Cases/Scripts: Integration test cases differ from other test cases in that they focus mainly on the interfaces and the flow of data/information between the modules. Here, priority is given to the integrating links rather than the unit functions, which have already been tested.
Role Participation: QA

3. Integration Test: Please refer to the Integration Test Approaches described above.
Role Participation: QA


UAT (User Acceptance Testing)

UAT stands for User Acceptance Testing and refers to the process where the software product is handed over to the stakeholders, i.e. the client/end users. They use the application for a specific period of time and approve or reject the software product or requirement(s). The product is released to Production when it passes user acceptance testing.

User Acceptance Testing (UAT), also known as beta or end-user testing, is the final testing performed once functional, system and regression testing are completed. Its main purpose is to validate the software against the business requirements. This validation is carried out by end users who are familiar with the business requirements.


Entry Criteria

Entry criteria define what is needed to start the testing. Since these conditions impact the commencement of UAT, it is necessary for the QA team to understand the project and its requirements, and to define the entry criteria accordingly.

All of the following tasks should be completed before commencing user acceptance testing:

  • User story completed and signed-off (for each sprint in Agile)
  • Completion of System and Integration Testing
  • Completion of Regression Testing
  • Validations undertaken against the Functional specifications
  • All environment and access requests sorted out
  • UI Validations completed
  • Successful UAT sanity check is performed
  • Ensured that there are no open critical defects /showstoppers
  • Business Users identified
  • Ensure 100% test coverage in system testing as per acceptance criteria defined in sprint planning

End to End UAT Validation - Objectives

  • Applications are functional and stable
  • Critical business processes are intact & work as expected
  • All transactions are properly integrated into the business process
  • Data flow and data integrity are maintained across applications
  • Compatibility with browsers & operating systems is ensured, if applicable
  • Business user review is sought on the usability aspects
  • Performance of the application is acceptable

UAT Exit Criteria

  • Acceptance of Business Flows against Business Requirements by End Users of the System
  • All UAT defects are fixed & UAT is signed off


Steps & Roles

1. Determine Correct User Roles: Define who from the team will be involved in the UAT process.
Role Participation: UAT Stakeholder, Product Owner, Business Analyst, QA Team

2. Choose the Type of Testing: The type of testing, either in-person or self-paced, is determined by the location of your team members.
Role Participation: UAT Stakeholder, QA Team

3. Determine Time frames: Include a placeholder for UAT for the standard time frame your product team expects, which could be 1 or 2 weeks depending upon the factors below:

  • Environment Standards
  • Scope of Work
  • Availability of Team
  • Venue Availability

Role Participation: Product Manager, QA Team

4. Determine Documentation Standard: A list of standard documentation that is important to create and maintain

  • Testing Strategy and Plan
  • Test Case Outcomes
  • Requirements for Sign-Off

Role Participation: Product Manager, QA Team

5. Determine the Change Control Process: Changes may come up during UAT. The project may have a standard process to manage 'change requests' that can be followed for incorporating these changes when they are needed.
Change requests may vary from project to project and will need to be discussed between the project/product managers and teams, where the appropriate actions may be taken to deliver the change (i.e. adding the change request to the Product Backlog, e.g. as a new user story to deliver in a later sprint, or delivering it as part of the current Sprint).
Role Participation: Product Manager, QA Team

Assurance approach

Completing UAT should be an exit criterion for the QA process and an entry criterion for Production deployment.

Exception process

The QA Manager and Project Manager should agree that this guide rail will not be followed for the defined projects/phases.


Test Summary Report

A Test Summary Report should summarise the testing carried out by QA or a supplier in relation to software or software component delivery.

Reports will vary based on the method of delivery and associated dependencies.

As a minimum a Test Summary Report should contain:

  • A summary of the progress of the testing through the test phase (planned against actual)
  • Testing Scope
  • The results of the test cases that have been executed
  • All the defects that have been raised and their history
  • Any outstanding actions

Contents of a Test Summary Report

The headings below are suggestions for what a Test Summary Report should contain.
The bullet points are examples or suggestions of content contained within those headings.
Relevance of the headings will depend on the type of testing being conducted and the application or system being tested.

Test Summary
  • Release details
  • Testing RAG status
  • Summary of testing carried out and results
    • Number of user stories
    • Number of defects raised
    • Number of defects resolved
    • Exit criteria met
    • Sign off acquired
  • Any risks, recommendations and agreed actions.
Test Environments & Tools
  • Integration
  • Staging
  • BrowserStack
  • Applitools
Testing Scope

Include all the user stories planned and tested for the release.

Document the type of testing carried out:

  • Sprint release stories
  • Integration testing
  • Performance testing
  • Regression testing
  • Smoke testing (after deployment)

Out of Scope
Items not tested (due to constraints/dependencies/restrictions)

Test Results & Metrics

You can incorporate visual test and defect metrics along with a summary to facilitate the visualisation of testing results, e.g.

  • Details of any automated tests performed
  • Details of any manual regression tests
  • A summary of the performance testing carried out, the results and their location
  • Risks, issues and caveats, such as:
    • Problems with test environment access or availability
    • Automation tests failing, having to run tests manually
    • Potential service impacts
  • Best Practices Adopted or Improvements Implemented
    • Crowd testing
    • Updated Automation Regression Pack

Example Test Summary Reports

The examples below show two variations of Test Summary Reports.
Both contain the key information required for stakeholders to make decisions regarding deployment or release based on the outcome of testing.

TCR - Online Fundraising Release 7.0
BA - 2019 January Release Test Completion Report

Assurance approach

Test Summary Reports should be completed and signed off before system/software is deployed to Production.
The Test Summary Reports should be stored in an agreed location and be visible to key stakeholders.

Exception process

The QA Manager and Project Manager should agree that this guide rail will not be followed for the defined projects/phases.


Reviewing Unit Tests

A common argument against having testers review unit tests is: "Why do we need testers to review code when we have code coverage tools that allow for fully tested code?" But the phrase "fully tested" can be misleading. 100% code coverage does not equal high-quality code and is simply not enough. Various other things might get overlooked without a second pair of eyes looking at the code, especially a pair bringing a QA point of view, such as:

  • Missing tests
  • Missing requirements
  • Missing error handling
  • Incomplete tests
  • Tests could be missing important data and only testing with data that succeeds
  • Failing to test data that causes failures

Let's go through a couple of the scenarios mentioned above, taking the Activity Management React form as an example. However, this should apply to many other products too.

Missing Functionality and Missing Tests

Data transformation requirements for First name field are:

  1. Remove spaces and init cap (Title Case)
  2. Remove spaces around any hyphens
  3. Capitalise the first letter of FORENAME
  4. Capitalise characters after hyphens and quotes

Unit tests to be reviewed

Unit tests for data transformation for First name field

it("capitalises initial letters", () => {
const { firstName } = transform({ firstName: "john" });
expect(firstName).toBe("John");
});

it("capitalises initial letters after spaces", () => {
const { firstName } = transform({ firstName: "lily rose" });
expect(firstName).toBe("Lily Rose");
});

it("capitalises initial letters after apostrophes", () => {
const { firstName } = transform({ firstName: "D'andre" });
expect(firstName).toBe("D'Andre");
});

it("capitalises initial letters after hyphens", () => {
const { firstName } = transform({ firstName: "Lily-rose" });
expect(firstName).toBe("Lily-Rose");
});

it("removes spaces around hyphens", () => {
const { firstName } = transform({ firstName: "Lily - Rose" });
expect(firstName).toBe("Lily-Rose");
});

Based on the requirements mentioned in the acceptance criteria, here is a list of possible test cases:

Test case                                     | Unit test coverage
capitalises initial letters                   | Yes
capitalises initial letters after spaces      | Yes
capitalises initial letters after apostrophes | Yes
capitalises initial letters after hyphens     | Yes
removes spaces around hyphens                 | Yes
removes spaces around apostrophes             | No
removes extra spaces                          | No
truncates spaces                              | No
Based on the table above, the 3 additional test cases that were not covered were added to the unit test suite. This also resulted in identifying missing functionality, and the respective defects were raised to fix it.
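
A sketch of what those three added tests might look like (the input and expected values are illustrative, following the stated requirements):

it("removes spaces around apostrophes", () => {
  const { firstName } = transform({ firstName: "D ' andre" });
  expect(firstName).toBe("D'Andre");
});

it("removes extra spaces", () => {
  const { firstName } = transform({ firstName: "lily   rose" });
  expect(firstName).toBe("Lily Rose");
});

it("truncates leading and trailing spaces", () => {
  const { firstName } = transform({ firstName: "  john  " });
  expect(firstName).toBe("John");
});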

Unit tests review checklist

  1. Does the code have tests?
  2. Although most of the test cases will be discussed in the 3 amigos session, it is important to carry out exploratory testing and think about edge-case scenarios. This activity will help identify missing test cases as well as missing functionality. These scenarios can later be added to the existing unit tests if not covered already
  3. Check for incorrect tests. For example, consider functionality that reverses an input string. If the test only checks that the output string is not empty, it can still yield 100% code coverage although in reality nothing has been tested; it should instead check that the output string is actually reversed (see the sketch after this list)
  4. Validate assertions, as they are sometimes missing or easily overlooked
  5. Look for duplicates. For example, in the above scenario, if first name and last name use the same data transformation functions, it is enough to test each scenario against one field only rather than repeating the same set of test cases for Last name
  6. Move validation scenarios that are being run as acceptance tests into the unit test suite: running them at the acceptance level is much more expensive, both because of the time the tests take to run and because the feedback arrives at a much later stage of the deployment cycle
  7. Look for duplicates between unit tests and acceptance tests - If there are repetitions, remove the acceptance tests to optimise the acceptance test pack
  8. Are all parts of the code testable (could the code be refactored to make testing easier?)
  9. Are you testing more than one thing inside one test? (This can happen when the method/function wraps different pieces of logic together.)
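
To illustrate point 3, a minimal sketch (the reverseString function is hypothetical):

// A hypothetical string-reversal function, used only for illustration.
const reverseString = (input: string): string =>
  input.split("").reverse().join("");

// Weak test: passes (and yields full coverage) without verifying behaviour.
it("returns a non-empty string", () => {
  expect(reverseString("abc")).not.toBe("");
});

// Meaningful test: asserts the actual expected outcome.
it("reverses the input string", () => {
  expect(reverseString("abc")).toBe("cba");
});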

For further reading

Once you get confident with reviewing and adding unit tests, you could also consider the idea of mutation testing.

Mutation testing improves the adequacy criteria of your tests. Adequacy criteria measure how thoroughly our test suite exercises the program under analysis.

A mutation testing framework introduces simple changes, or mutants, into the production code to check the fault-detection capability of the tests. This is not a new concept: while developing test cases, most of us will have made similar throwaway changes for debugging purposes, to check that a method performs its intended function.

Here is a sample mutation testing framework: https://stryker-mutator.io/stryker/quickstart
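
As a rough sketch, a StrykerJS configuration for a Jest-based project might look something like the following (the exact options depend on your Stryker version; check the documentation above):

// stryker.conf.js - illustrative StrykerJS configuration for a Jest project
module.exports = {
  testRunner: "jest",                           // run the existing Jest suite against each mutant
  mutate: ["src/**/*.ts", "!src/**/*.test.ts"], // which files to mutate
  reporters: ["html", "clear-text", "progress"],
  coverageAnalysis: "perTest",                  // only run the tests that cover each mutant
};

// Run with: npx stryker run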

Summary

Unit tests are valuable to the quality of a product as they help identify faults in the system at the earliest possible stage. But a high percentage of unit test coverage does not automatically equal high code quality; the tests need to be effective as well.

QA Engineers are required to review the unit tests to make sure all test conditions are met, to improve or supplement them with tests of their own, to keep the customer requirements perspective in mind when reviewing them, and to identify areas covered by unit testing that can help optimise the integration and system tests.


Testability

At its most basic level, testability is used to determine two things:

  • How easy software is to test
  • How likely it is that faults will be exposed

One of the key components of Testability is that it is owned by the ENTIRE team (Product/Dev).

Why do we need it?

High Testability positively impacts a team’s ability to deliver, providing:

  • Faster testing feedback loops (bugs easier to determine, locate, log)
  • Faster debugging (Actual fixes of relevant issues discovered)
  • More frequent and shorter release cycles

This complements the shift left approach seamlessly, as it entails actively considering the quality and testability of a release or product throughout the entire software development life cycle (SDLC), including the planning, design, and development stages, rather than waiting until a testing cycle is completed to assess its state.

How is Testability achieved or measured?

Different teams may have different ways of working (processes, systems, etc.), but if the following are adhered to then Testability can be achieved regardless:

  • Well documented requirements
  • Clearly defined User stories/Acceptance tests
  • Static analysis (examining the code without executing the program)
  • Static testing (eliminate errors/ambiguities such as requirements, design, test cases)
  • Applicable test artefacts (automated/manual test suite, test matrix/coverage)
  • Code reviews
  • Maintain Unit tests
  • Logs (describe what happened and where)
  • Accurate testing environments (replicate live instances)
  • Dependency injection (a technique where an object receives other objects that it depends on)
  • Mocks/stubs (mimicking functionality we don’t care about)
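
As a brief illustration of the last two points, a minimal TypeScript sketch of dependency injection with a stubbed dependency (all names are hypothetical):

// The dependency is described by an interface rather than a concrete class.
interface EmailSender {
  send(to: string, body: string): Promise<void>;
}

// The service receives its dependency instead of constructing it itself,
// so a test can inject a lightweight stub.
class DonationService {
  constructor(private readonly emailSender: EmailSender) {}

  async thankDonor(email: string): Promise<void> {
    await this.emailSender.send(email, "Thank you for your donation!");
  }
}

// In a test, a stub records calls without sending real emails.
const sent: string[] = [];
const stubSender: EmailSender = {
  send: async (to) => { sent.push(to); },
};

new DonationService(stubSender)
  .thankDonor("supporter@example.com")
  .then(() => console.log(sent)); // ["supporter@example.com"]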

Conclusion

Testability ensures that we consider quality much sooner in the SDLC, enabling us to produce higher-quality deliverables (features, products, etc.) in a much more efficient and measurable way. It should therefore be incorporated into every product's SDLC process.



Testing in Production

Testing in Production means performing various tests in a production state or live environment. At CRUK such testing can include smoke testing and OAT (Operational Acceptance Testing), e.g. CRM smoke tests.

When it comes to releasing a new product or new features of a product into Production, the primary focus of Production Testing is to verify that functionality critical to the application is working as expected. This is crucial, as these systems serve real-world users and have a direct impact on customer satisfaction.

Its purpose is to quickly identify fundamental issues that would make further testing impractical. It allows testers to detect defects in real world scenarios, and ensures that the application works the way it is expected to after deployment.

Testers are usually given a separate QA environment to perform end-to-end testing and are not usually allowed to test in a production environment, for various reasons such as the risk of impacting user data or of sending out emails to users.

As per standard practice, only smoke tests are carried out in production environments. These smoke tests need to be planned and agreed with business teams carefully.


How should Testing in a Production Environment be Performed?

It is ideal to create a checklist for the testing that will be carried out in Production. This should cover the main functionality of the application, which is reviewed by the project team. Also, if required, the modules in the application can be divided into small chunks and distributed amongst the QA team to test and check for quality of the application. QA Engineers must ensure that a checklist is maintained and updated regularly with any new critical tests to maintain the quality of the application.
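
One way to keep such a checklist executable is to tag the critical journeys in the automation pack and run only those in Production; a hedged Playwright sketch (the tag convention and URL are illustrative):

// smoke.spec.ts - tag critical checks so they can be run selectively in Production
import { test, expect } from "@playwright/test";

test("home page loads @smoke", async ({ page }) => {
  await page.goto("https://example.org/"); // hypothetical URL
  await expect(page).toHaveTitle(/Cancer Research UK/);
});

// Run only the tagged tests against Production:
// npx playwright test --grep @smoke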

The monitoring of production servers is managed by the Release Team to ensure that all services are running smoothly and no glaring errors are flagged. This is done by assessing or monitoring audit logs on the server, by using Dynatrace dashboards, or via admin pages in PHP sites.

Guidelines to Performing Testing in a Production Environment

Outlined below are some guidelines that must be followed whilst testing in a production environment:

  1. Make sure access to the environment is available / granted
  2. The aim is to test using real data as far as possible or to carry out static reviews for Siebel Releases
  3. Dummy data can be used as long as it is agreed with the business stakeholders
  4. Never modify existing live data as part of your testing
  5. Never try load testing on a production environment
  6. Any issues found during production testing are generally raised in Jira/ServiceNow and fixed as a hotfix (if required).

Entry Criteria

  1. UAT has been signed off and a GO decision is given
  2. The code/release has been deployed to the production environment and communicated to the project/product team.

Exit Criteria

  1. All tests on the production test checklist (Monitor logs, Sanity test, OAT scripts, End-to-end connectivity) have been executed and no issues have been reported
  2. Any issues reported are logged and the project/product teams are made aware of these.

Quality Assurance approach in a Production Environment

The ultimate goal of Testing in a Production environment is to ensure that the application is stable and runs the way it is expected to in the production environment. Daily monitoring of the production/live environment gives the business confidence that the application is running smoothly. For example:

  • For the CRM project - Server Components and Jobs, ActiveMQ queues, DLQ, rectification errors, audit logs.
  • Other projects - Set up dashboards for monitoring server CPU and memory usage.

Here are some advantages of Testing in a Production environment:

  1. Monitors the application's performance in real-time scenarios, e.g. multiple users donating on the same fundraising page.
  2. Monitors the API responses under peak traffic
  3. Helps with maintaining the quality of the application for user experience.

Test Coverage

In simple terms: what are we testing, and how much are we testing? Test coverage helps monitor the quality of testing, and assists testers in creating tests that cover areas that are missing or not yet validated.

How can Test Coverage be accomplished?

  • Test coverage can be achieved by exercising static review techniques like peer reviews, inspections, and walkthroughs
  • By transforming ad-hoc defects into executable test cases
  • At code level or unit test level, test coverage can be achieved by using automated code coverage or unit test coverage tools (static analysis tools). Note: static analysis tools can help developers to understand the structure of their code, and can also be used to enforce coding standards
  • Functional test coverage can be achieved by converting acceptance criteria to tests, or by using a requirements traceability matrix as applicable.

Advantages of Test Coverage

  • Helps find areas of a requirement not exercised by the existing test cases
  • Helps create additional test cases to increase coverage
  • Provides a quantitative measure of test coverage, which is an indirect method of quality checking
  • Identifies meaningless test cases that do not increase coverage
  • Helps assure the quality of the testing
  • Helps determine the paths in your application that were not tested
  • Prevents defect leakage
  • Keeps time, scope and cost under control
  • Supports defect prevention at an early stage of the project lifecycle
  • Identifies all the decision points and paths used in the application, which allows you to increase test coverage
  • Makes it easier to find gaps in requirements, test cases and defects at the unit and code level
  • Is useful in determining the exit criteria

Drawbacks of Test Coverage

  • There are few tools to automate the analysis, so most of the work is manual
  • For legacy applications there can be a cost and time overhead for requirements and test case analysis

Code coverage

Code coverage is a metric related to unit testing. The idea is to measure the percentage of lines and execution paths in the code covered by at least one test case. Since code coverage is a unit test metric, tools for checking it, such as Istanbul/NYC, can run alongside unit testing software like Mocha and Jest, or are integrated with the unit testing software, as with PHPUnit, so whoever is writing the software can check their code coverage. The code coverage tools given in these examples are used at CRUK. In Online Fundraising, PHPUnit is used for code coverage of the web service; a report is generated in Clover format which then feeds into the GitHub Actions coverage checker. Code coverage measures the coverage of what has been written, i.e. the code itself; it cannot say anything about software that has not been written.
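
For example, Jest can generate a coverage report (including Clover format) and enforce a minimum threshold; a minimal sketch of the relevant configuration (the threshold figures are illustrative):

// jest.config.js - illustrative coverage settings
module.exports = {
  collectCoverage: true,
  coverageReporters: ["text", "lcov", "clover"],
  coverageThreshold: {
    // Fail the run if overall coverage drops below these (illustrative) figures.
    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
  },
};

// Or run ad hoc with: npx jest --coverage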

Test coverage

Test coverage is the process you use to determine whether you're testing everything you're supposed to test. This includes the application itself (the codebase), the environment the application will run on (e.g. browsers, devices, operating systems), the user demographics (including accessibility), the performance of the application, and its security. Test coverage also includes whether requirements have been met from technical and user perspectives. Test coverage allows you to check the quality of your testing. By finding areas that aren't currently covered, test coverage helps you develop tests for those areas, increasing the overall quality of your test suite.

Anyone with knowledge of the system or application being tested can contribute to the scope of test coverage.

At CRUK we use numerous tools and techniques to increase our test coverage capability; some examples:

  • Mocha, JUnit, Jest, PHPUnit for unit testing code
  • Applitools for device and browser compatibility testing
  • K6, Artillery and Lighthouse for performance testing
  • Lighthouse and the @axe-core/playwright package, as part of Playwright tests, for accessibility testing
  • Playwright Test Library for functional and regression testing
  • Postman for API testing

Test Coverage Techniques

There are several areas of test coverage which are essential to examine, each with its respective coverage technique:


  1. Product Coverage: This measures test coverage from the product perspective: which areas of the product have we tested? Example: the app must be capable of responding to unusual user actions. It must be checked whether multiple users can interact with the program at the same time. It is also important whether the application is able to function properly while other apps are open on the same device. If at least one of these elements is not tested, it is impossible to call the testing complete. Let's say the application you're testing is a simple calculator (only the four arithmetic operations available). Of course, you should verify that the operations work as intended, but that alone is not enough. There are other factors you must take into account: how well does the calculator handle really large numbers (in other words, what range does it support)? What is its behaviour when the user does something unexpected, like pasting non-numerical characters into the text field or trying to divide by zero? (A sketch illustrating this appears after this list.)

  2. Risk Coverage: This technique consists of listing the risks involved in the application and making sure they're covered by tests. Example: Online Fundraising has a web service. This API should only be exposed to specific applications like Events or Activity Management, as it contains personal information of supporters. In addition, donations can be made via the platform, which requires handing the donor over to a payments system and retrieving those details. Test coverage should include all possible supporter journeys between applications, to ensure the applications are integrated correctly and securely and there is no risk of data extraction or interception by unauthorised sources. If you want to say that your application is covered, you have to list all of its relevant risks and answer the questions they elicit.

  3. Requirements Coverage: This stage aims to define whether the developed product matches the expected requirements and is capable of performing its purpose. Example: Online Fundraising has a Strava integration. Requirements were derived from business users and end-user tests. Requirements coverage included checking the technical functionality of the integration, for example that a giving page could be successfully linked to a Strava account. It also included functional testing from a user perspective, including crowd testing to get interactions from a wide user base on multiple devices, to ensure users were able to use the functionality set out in the defined requirements.

  4. Compatibility Coverage: This covers how the program integrates with different browsers and platforms, and surfaces issues with devices that have various settings and configurations.
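
A sketch illustrating the product-coverage point from item 1, using a hypothetical divide function and Jest-style tests:

// calculator.test.ts - covering unexpected input, not just the happy path
const divide = (a: number, b: number): number => {
  if (b === 0) throw new Error("Cannot divide by zero");
  return a / b;
};

it("divides two numbers", () => {
  expect(divide(10, 4)).toBe(2.5);
});

it("rejects division by zero", () => {
  expect(() => divide(10, 0)).toThrow("Cannot divide by zero");
});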


Observability in Testing

What is Observability?

To know the answer, let's look at some of the common challenges faced by a product team:

  1. Passing tests with zero failures, all green dashboards, and yet there is a hotfix to do after a deployment.
  2. Having to deal with ServiceNow tickets which are about an unusual error faced by a user that is hard to reproduce.
  3. Spending too much time trying to understand an alert or what caused the alert rather than fixing it.

If you have been impacted by any of these challenges, it is easy to think "monitoring is the answer", and you'd be partly right: monitoring is good, but it's not good enough. Observability helps the development team deal with the unknowns mentioned above. Rather than the team spending a lot of time trying to trace a user error or understand the user's behaviour, observability takes care of that and lets the development team focus their efforts and time on the more meaningful act of fixing the actual problem.

To quote Test Automation University: "With monitoring, we have a way to track, identify and lock in expected behaviour across a wide diversity of systems, and we can do this in a fairly standard way. In addition, monitoring lends itself to automated alerts if any system expectations are breached. In contrast, observability is what the people creating the system, and those on the receiving side of that alert, need to be effective at their jobs. Observability targets supporting the unknown."

How does observability impact testing?

While we do a significant amount of testing in our local and pre-prod environments, we are unable to perform any deep-level testing in the production environment owing to security and data protection concerns. It is ironic that we do so little testing in the one environment that takes us closest to our user base. Observability is a real-time testing methodology that helps the development team understand and predict system failures, with real users acting as our testers.

Here is how Thoughtworks believes Observability supports testing:

1. Exploratory Testing: During exploratory testing, you might find unexpected behaviour inside the system. Observability can help you dig deeper and find out what the cause might be. Sometimes you won’t find a solution; but with observability tooling that can give you a detailed insight into logs, metrics, and traces, you will be better placed to share more information with developers, making it easier to collaborate and work together to find a solution.

2. Decision making: Testers are typically curious. They like to explore and are great at asking questions. With observability tools, they can explore a product in great depth and detail. This allows them to uncover valuable information that can guide their decision-making when testing.


Datadog

Datadog is a SaaS monitoring and security platform that combines the three pillars of observability in one unified platform to enable full visibility and insight into the metrics, traces, and logs across your applications, infrastructure, and third-party services.

Datadog’s out-of-the-box dashboards allow you to analyse data from across your entire system in a single pane of glass. Template and auto-generated dashboards enable your team to immediately benefit from dynamic views with no query language or coding required. You can also customise your dashboards with a vast library of visualisation tools and drag-and-drop widgets to best meet your team’s needs.

Datadog solutions

Some of the Datadog solutions that are useful from a QA perspective are as follows:

1. Real User Monitoring (RUM) & Session Replay

With Datadog's real user monitoring, you can analyse product usage (based on mobile, web, browser, country, or any custom dimension), troubleshoot frontend errors, and identify frontend performance issues that are impacting users. For QA, this means that we can track errors, analyse performance, extract browser/device usage data very easily and effectively. All of this information can be used to establish the test strategy for your product.

The session replay feature allows you to capture, observe, and analyse the user experience of your applications, meaning less time spent on debugging and troubleshooting unusual behaviour or edge-case scenarios within your application.

Datadog's frustration signals feature helps you identify your application’s highest points of user friction by surfacing moments when users exhibit frustration. RUM collects three types of frustration signals:

  • Rage Clicks - A user clicks on an element more than three times in a one-second sliding window.

  • Dead Clicks - A user clicks on a static element that produces no action on the page.

  • Error Clicks - A user clicks on an element right before a JavaScript error occurs.

2. Synthetic Monitoring

It is normal practice to run lightweight smoke tests in the Production environment. There are two potential problems here:

  • The system closest to our users does not get tested as much as the pre-production systems.
  • We get to know of any production errors only after a user encounters them, which leads to a potential hotfix and delays in availability.

Datadog's Synthetic Monitoring lets us create and run scheduled tests regularly in any environment. We can regularly test our production system, not just monitor it, and find errors even before the user does.

Since Datadog offers a combination of browser-based tests with code constructs, it gives us the flexibility to achieve test automation in areas that were previously not possible. For example, Datadog has an email-checking functionality that will send emails to a Datadog email address that gives us the ability to perform various assertions on these emails.

Datadog Synthetic Monitoring allows you:

  • to create code-free tests that proactively simulate user transactions on your applications and monitor key network endpoints across the various layers of your systems.

  • to quickly detect user-facing issues with API and browser tests, and to shift testing left by incorporating Synthetic tests in your CI pipelines for early issue detection and remediation.

3. Continuous Testing

Datadog Continuous Testing offers a set of tools that enable you to automate software testing for a product’s entire lifecycle. Test even earlier and across every level of your application stack. By offering code-free and reliable end-to-end testing and integrating seamlessly with popular CI providers and collaboration tools, Continuous Testing helps you accelerate application development and ship high-quality features faster. It saves QA effort through time-saving and highly scalable features, such as a codeless web recorder, parallel test runs, and built-in multi-location testing.

4. Error Tracking

Datadog's error tracking feature automatically aggregates similar errors into issues based on error type, error message, and stack trace. Datadog also lets you receive alerts when it identifies new issues, so you can stay on top of potential problems rather than receiving an alert for the same issue repeatedly, leading to digital clutter. You can investigate errors down to individual lines of code and link error stack traces to the relevant source code.

Datadog onboarding

Prerequisites
  1. You should be able to access Datadog from the Microsoft Office 365 page - Applications section.
  2. If you do not have access to this application, contact the Cloud and Hosting team for support or post your request in the #ask-sysops Slack channel.
  3. Ensure you have access to the CI and UX monitoring sections in Datadog
  4. Have a look around the Datadog console and familiarise yourself with the interface basics.

Datadog has extensive, step-by-step documentation for all of its apps and integrations.

To set up your first synthetic browser test, please go through this Datadog - Browser testing documentation.

You can also refer to some synthetic tests already created in the CRUK Datadog app for reference.
