Test runs
A Test Run is a single instance of executing a specific set of test cases.
A Test Run may consist of a single Test case, several of them, whole sets of Test cases (Test Suites), or even Test cases from different areas bundled together in a Test Plan.
There are two ways to start a Test Run:
1. Express Run - from the Project's Repository.
2. Regular Run - from the Test Runs page.
You can quickly set up a Test Run for one or more test cases directly from the Repository.
Hit the "Run" button and proceed to select the Environment, Milestone, Configurations, or any other fields, as needed.
To create a regular test run, navigate to the "Test Runs" section and hit the "Start New Test Run" button. You will see the same modal window as when creating a Test Run from the Repository page.
Run title: it is automatically set to the current date. You can replace it with your preferred title.
Eg: "Regression Test - Release 2.0"
Description: give additional details about the test run.
Eg: "This test run verifies new features and bug fixes in the latest release.
Environment: define in which environment the test run should be performed.
Eg: testing, staging, production.
Milestone: select the Milestone the Test Run is tied to.
Eg: Release 2.0
Configuration: choose from pre-defined configuration options.
Eg: OS: Windows ; Browser: Chrome
Custom Fields: set custom field values if previously configured.
Choosing Test cases: here, you'll be presented with three options.
From Repository.
From Test Plan: you can choose all or select Test cases from a Test Plan.
From Query: you can select Test cases using a saved query.
NB: Queries are available with Business and Enterprise subscriptions.
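The same run parameters can also be supplied programmatically. Below is a minimal sketch, assuming Qase's public REST API v1 run-creation endpoint (`POST /v1/run/{project_code}`) and illustrative field names and IDs; consult the API reference for the exact schema.

```python
import requests

QASE_API_TOKEN = "your-api-token"   # hypothetical placeholder
PROJECT_CODE = "DEMO"               # hypothetical project code

# The same fields the "Start New Test Run" modal asks for.
payload = {
    "title": "Regression Test - Release 2.0",
    "description": "This test run verifies new features and bug fixes in the latest release.",
    "environment_id": 1,            # assumed numeric id of the target environment
    "milestone_id": 2,              # assumed numeric id of the "Release 2.0" milestone
    "cases": [101, 102, 103],       # ids of the test cases selected for the run
}

response = requests.post(
    f"https://api.qase.io/v1/run/{PROJECT_CODE}",
    headers={"Token": QASE_API_TOKEN, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print("Created run id:", response.json()["result"]["id"])
```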
Here, you can see all of your Test Runs with their Author, Environment, Time Spent, and Status, along with a visual summary of their results.
A Test run Status will be assigned once the run is marked as complete.
A 'Passed' status is given only if all* tests have a positive result. One or more test cases with a negative result will automatically assign a 'Failed' status to the run.
Please note: once assigned, a status will not be re-calculated for any subsequent changes.
* Muted tests are not considered when calculating the final run status.
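To make the rule above concrete, here is a small, illustrative Python sketch of the calculation (not Qase's actual implementation): muted results are excluded, any failure makes the run 'Failed', and 'Passed' requires every considered case to pass.

```python
def final_run_status(results):
    """Illustrative sketch of the rule described above.

    `results` is a list of dicts like {"status": "passed", "muted": False}.
    Muted cases are excluded from the calculation; any failed result makes
    the whole run 'Failed', and 'Passed' requires every considered case
    to have passed.
    """
    considered = [r for r in results if not r.get("muted", False)]
    if any(r["status"] == "failed" for r in considered):
        return "Failed"
    if considered and all(r["status"] == "passed" for r in considered):
        return "Passed"
    return None  # other combinations (blocked, skipped, ...) aren't covered by this sketch


print(final_run_status([
    {"status": "passed"},
    {"status": "failed", "muted": True},  # muted, so ignored
]))  # -> Passed
```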
Your test runs have the following default statuses available to reflect the outcome of test cases:
Passed - The test case has been executed successfully, and the expected results match the actual outcomes.
Failed - The test case has failed, indicating discrepancies between the expected and actual results.
Blocked - The test couldn't be executed or completed due to some blocking issue, such as a dependency not met, environmental issues, or other constraints.
Skipped - The test case was intentionally skipped during the test run, often due to its non-applicability or other reasons.
Invalid - The test run wasn't completed for some reason, and the results are inconclusive.
Test run result statuses can also be customized by navigating to the Fields section in Workspace management and clicking "Result status" (Admin permissions are needed to modify fields).
You can also define the action associated with each custom status, determining whether it counts as a successful or a failed test case. If a status counts as a failed test case, the defect creation workflow is triggered.
Please note that deleted custom statuses are only hidden, to preserve the results history in test runs. Also, custom Test Run "Result statuses" are only available to users on a paid subscription.
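As an illustration only (the status names and mapping below are hypothetical), the action attached to a custom status can be thought of as a mapping from status to outcome, where a failing outcome starts the defect creation workflow:

```python
# Hypothetical custom statuses and the action each one triggers.
CUSTOM_STATUS_ACTIONS = {
    "Passed with warnings": "success",  # counts as a successful test case
    "Rejected": "fail",                 # counts as a failed test case
}


def handle_custom_status(status: str) -> None:
    action = CUSTOM_STATUS_ACTIONS.get(status)
    if action == "fail":
        # A failing action starts the defect creation workflow described above.
        print(f"'{status}' counts as failed: opening the defect creation workflow")
    elif action == "success":
        print(f"'{status}' counts as successful")


handle_custom_status("Rejected")
```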
To go to the Run dashboard, simply click on the Test Run name.
Let's explore the options available in the dashboard -
1. **Open Wizard:** this will guide you through the Test Cases contained in the run, step-by-step.
2. Share Report: turn on public link and easily share your Run report with anyone, even if they don't have an account in Qase.
3. Export: you can download a CSV export of your Test Run.
4. The "..." menu: the Clone run option allows you to create an identical copy of your run. You can also edit the run details, abort it, or go to the Test Run settings from here.
5. Search and Filters: search cases by their title or Id and filter by a specific parameter.
6. **Defects and Team stats:** view all defects associated with this run and see result stats by assignee.
7. The "..." menu button: has options to Run wizard, assign case to a team member or View/edit the Test case itself.
8. Run details: this sidebar houses a completion chart and other test run configuration details, such as:
Environment, Configurations, Milestones, and tags.
Run status, author, creation date, and estimated time for completion.
Linked External Issues.
9. Bulk actions: you can select multiple test cases to assign, re-test, submit a result for, or delete using the respective buttons.
You have the flexibility to assign test cases within a test run either to individual users or to groups.
Individual assignment:
You can assign specific test cases to individual testers. This approach is particularly useful when you want to assign a suite of test cases related to a specific feature to a tester with specialized knowledge in that area.
Group Assignment:
In this assignment mode, you can select a group that includes multiple testers.
Even distribution: this strategy ensures that test cases are distributed as evenly as possible among the group members. For instance, if there are 10 test cases and 5 testers in a group, each tester will receive 2 test cases.
Load balanced distribution: this strategy optimizes the distribution of test cases based on their complexity, measured by previous test durations, with the goal of evenly distributing the workload among group members.
For instance, if there are five test cases with expected durations of 30, 10, 10, 5, and 5 minutes, and two testers in a group, the system will assign the 30-minute test case to one tester and the remaining four test cases (with a combined duration of 30 minutes) to the other tester. This way, both testers have an even workload despite the number of test cases being different (see the sketch below).
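The two strategies can be illustrated with a short Python sketch. The greedy balancing below is only an approximation of the idea (Qase's exact algorithm may differ), but it reproduces the 30/10/10/5/5-minute example above.

```python
from itertools import cycle


def even_distribution(case_ids, testers):
    """Round-robin assignment: cases are spread as evenly as possible."""
    assignment = {t: [] for t in testers}
    for case_id, tester in zip(case_ids, cycle(testers)):
        assignment[tester].append(case_id)
    return assignment


def load_balanced_distribution(case_durations, testers):
    """Greedy balancing by expected duration: each case goes to the
    tester with the smallest total workload so far."""
    workload = {t: 0 for t in testers}
    assignment = {t: [] for t in testers}
    # Assign the longest cases first so large items don't skew the result.
    for case_id, minutes in sorted(case_durations.items(), key=lambda kv: -kv[1]):
        tester = min(workload, key=workload.get)
        assignment[tester].append(case_id)
        workload[tester] += minutes
    return assignment, workload


# The example from above: 30, 10, 10, 5, and 5 minutes split between two testers.
durations = {"TC-1": 30, "TC-2": 10, "TC-3": 10, "TC-4": 5, "TC-5": 5}
assignment, workload = load_balanced_distribution(durations, ["alice", "bob"])
print(assignment)  # {'alice': ['TC-1'], 'bob': ['TC-2', 'TC-3', 'TC-4', 'TC-5']}
print(workload)    # {'alice': 30, 'bob': 30} - both testers get 30 minutes of work
```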
Assignment settings:
The default assignment strategy can be configured under the Test Runs tab in your Project's settings.
This setting determines the default assignment strategy that appears when you attempt to assign cases to a group.
Defects Tab:
Team Stats Tab:
Opening the wizard is easy: just click on a test case in a Run. You can also get to the wizard in the following ways:
Click "Open Wizard" in the Dashboard:
Click "Open Wizard" in the Test Runs menu:
Click "Run Wizard" in the "..." menu of a Test Case:
In the wizard, you can advance through Test Cases, add comments and attachments, and log results for both individual steps and the entire Test Case.
You can also file defects as you work through the test case. Check the defects article for more information.
In the wizard, use the "View/Edit Case" buttons to open a test case in a new tab for viewing or making changes.
After finishing a Test Run, you can still add results by enabling "Allow to add results for cases in closed runs" in settings. Then go to Test Runs and open the run using the "Open Wizard" option. In the wizard, click the edit icon (pencil) to adjust the run duration, add comments, and attachments.
Save changes with the green check mark or discard them with the red cross.
Under Project Settings, there's a dedicated section for modifying run behaviour.
Let's look at each option in more detail:
| Option | Behaviour |
| --- | --- |
| Fast Pass | |
| Default create/attach defect checkbox | |
| Auto complete | |
| Auto passed | |
| Auto assignee | |
| Auto create test cases | For results reported via API or an API reporter. |
| Fail case on step fail | |
| Allow to add results for cases in closed runs | |
| Assignee result lock | |
| Redirect after adding result | |
| Default Assignment Strategy | |
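Several of these options (for example "Auto create test cases" and "Allow to add results for cases in closed runs") concern results reported programmatically. Below is a minimal sketch of reporting a single result, assuming the v1 endpoint `POST /v1/result/{project_code}/{run_id}` and illustrative field values; check the API reference for the exact schema.

```python
import requests

QASE_API_TOKEN = "your-api-token"   # hypothetical placeholder
PROJECT_CODE = "DEMO"               # hypothetical project code
RUN_ID = 42                         # id of an existing test run

# Report one result into the run. With "Auto create test cases" enabled,
# API reporters can submit results for cases that don't exist yet.
payload = {
    "case_id": 101,                 # id of an existing test case
    "status": "passed",
    "time_ms": 35000,               # execution time in milliseconds (assumed field)
    "comment": "Reported from CI",
}

response = requests.post(
    f"https://api.qase.io/v1/result/{PROJECT_CODE}/{RUN_ID}",
    headers={"Token": QASE_API_TOKEN, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```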