Test Runs

Learn how to create, execute, and manage test runs in Qase, from express runs and the test wizard to configurations, result statuses, and re-testing failures.

Overview


A test run is a single execution of a set of test cases — the moment your test design meets reality.

Every test run takes cases from your repository (individually, by suite, or from a test plan) and produces results. Runs are the primary unit of execution in Qase: they track who ran what, how long it took, what passed, and what broke.

There are two ways to start a run — express and regular — and two ways to execute one: manually through the wizard, or automatically via API reporters. This article covers the manual workflow end-to-end.

Express run vs. regular run

Express run — Start directly from the repository. Select one or more cases, hit Run, and you're executing immediately. Best for quick smoke tests or verifying a single fix.

Regular run — Start from the Test Runs page using Start New Test Run. This opens the full creation modal where you configure properties, choose cases from multiple sources, and set up assignments. Use this for planned regression cycles, sprint testing, or anything you want to track formally.

Both paths open the same creation modal — express runs just pre-select the cases you chose in the repository.

You can also schedule test runs to create runs automatically on a recurring basis.

Run properties

When creating a run, you can configure the following:

  • Title: Defaults to today's date. Replace it with something meaningful, like Regression – v2.4 or Checkout smoke test.

  • Description: Context for your team on what's being tested and why.

  • Environment: The environment this run targets (e.g., staging, production). Environments are defined in project settings.

  • Milestone: Links the run to a release milestone for tracking progress toward a ship date.

  • Configuration: Hardware/software combinations to test against. See Configurations below.

  • Custom fields: Any custom fields configured for runs in your workspace.

Choosing test cases

You can populate a run from three sources:

  1. Repository — Browse and select individual cases or entire suites.

  2. Test plan — Pull all cases (or a subset) from an existing test plan.

  3. Saved queries — Use a saved QQL query to dynamically select cases matching specific criteria.
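
If you script against Qase, runs can also be created through the REST API with the same properties and case selection. Below is a minimal sketch in Python; the endpoint path, Token header, and field names (title, environment_id, cases, plan_id) are assumptions based on the public v1 API, so verify them against the current API reference:

```python
# Minimal sketch: create a test run via Qase's REST API (v1).
# Endpoint and payload fields are assumptions -- check the API reference.
import requests

QASE_TOKEN = "your-api-token"  # placeholder
PROJECT = "DEMO"               # placeholder project code

payload = {
    "title": "Regression – v2.4",
    "description": "Full regression ahead of the v2.4 release.",
    "environment_id": 1,        # optional: an environment from project settings
    "cases": [101, 102, 103],   # explicit case IDs; alternatively pass "plan_id"
}

resp = requests.post(
    f"https://api.qase.io/v1/run/{PROJECT}",
    headers={"Token": QASE_TOKEN, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
run_id = resp.json()["result"]["id"]
print(f"Created run {run_id}")
```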

Configurations

Configurations let you specify the hardware or software matrix a run should cover. They're managed per-project under project settings.

How configurations work:

  1. Create a configuration group — a category like "Browser" or "Operating System."

  2. Add individual configurations to the group — Chrome, Firefox, Safari under "Browser," for example.

  3. When creating a run, select configurations from any group. If you select from multiple groups, Qase creates the cross-product: selecting Chrome + Firefox from "Browser" and Windows + macOS from "OS" produces four configuration combinations.

This is useful for compatibility testing where the same cases need to run across multiple environments. Each configuration combination gets its own result set within the run, so you can see exactly which platform had failures.

Configuration groups and individual configurations can be edited or deleted after creation.
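
To picture the cross-product, here is a small Python sketch (purely illustrative, not Qase code) that enumerates the combinations from the example above:

```python
# The run covers every combination of the selections from each group.
from itertools import product

browsers = ["Chrome", "Firefox"]
systems = ["Windows", "macOS"]

combinations = list(product(browsers, systems))
print(len(combinations))  # 4
for browser, os_name in combinations:
    print(f"{browser} / {os_name}")
```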

Executing a test run (the wizard)

The run wizard is where actual testing happens. Open it by clicking any case in a run, or use Open Wizard from the run dashboard or the runs list menu.

In the wizard, you work through cases sequentially:

  • Record results for individual steps and the overall case (Passed, Failed, Blocked, Skipped, Invalid).

  • Add comments explaining what you observed.

  • Attach files — screenshots, logs, anything that supports the result.

  • File defects directly from a failed case.

  • View or edit the test case in a new tab if you spot an issue with the case itself.

The wizard respects your project's test run settings (configured under Project Settings → Test Runs):

  • Fast Pass: Skips the details modal when you mark a case as passed or skipped.

  • Fail case on step fail: Failing any step automatically fails the entire case.

  • Auto passed: Passing all steps automatically passes the case.

  • Auto assignee: The first person to open an unassigned case in the wizard is assigned to it automatically.

  • Assignee result lock: Only the assigned user can submit results; others must reassign the case first.

  • Redirect after adding result: Where the wizard goes after you submit a result (the same case, the next untested case, or the next case in the suite).
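
To make the step-level settings concrete, here is an illustrative Python sketch (not Qase's actual logic) of how Fail case on step fail and Auto passed combine to derive a case status from its steps:

```python
# Illustrative only: how the two step-level settings interact.
def derive_case_status(step_statuses, fail_on_step_fail=True, auto_passed=True):
    if fail_on_step_fail and "failed" in step_statuses:
        return "failed"
    if auto_passed and step_statuses and all(s == "passed" for s in step_statuses):
        return "passed"
    return None  # no automatic outcome; the tester picks one in the wizard

print(derive_case_status(["passed", "failed", "passed"]))  # failed
print(derive_case_status(["passed", "passed"]))            # passed
print(derive_case_status(["passed", "blocked"]))           # None
```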

Assigning cases

Cases can be assigned to individuals or to groups. Group assignment supports two strategies:

  • Even distribution — Cases are split equally by count regardless of complexity.

  • Load-balanced distribution — Cases are distributed by estimated duration (based on historical execution times), so each person gets roughly equal work time rather than equal case count.

The default strategy is configured in Project Settings → Test Runs.
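
The sketch below illustrates the difference between the two strategies. It is a plausible greedy approximation, not Qase's actual algorithm, and the case durations are invented:

```python
# Illustrative sketch of the two assignment strategies -- not Qase's code.
def even_distribution(cases, testers):
    """Equal case counts, regardless of complexity."""
    buckets = {t: [] for t in testers}
    for i, (case_id, _est) in enumerate(cases):
        buckets[testers[i % len(testers)]].append(case_id)
    return buckets

def load_balanced(cases, testers):
    """Longest cases first, each assigned to the least-loaded tester."""
    buckets = {t: [] for t in testers}
    load = {t: 0 for t in testers}
    for case_id, est_minutes in sorted(cases, key=lambda c: -c[1]):
        lightest = min(load, key=load.get)
        buckets[lightest].append(case_id)
        load[lightest] += est_minutes
    return buckets

cases = [("TC-1", 30), ("TC-2", 5), ("TC-3", 25), ("TC-4", 10)]
print(even_distribution(cases, ["alice", "bob"]))
# {'alice': ['TC-1', 'TC-3'], 'bob': ['TC-2', 'TC-4']} -- 55 vs. 15 minutes
print(load_balanced(cases, ["alice", "bob"]))
# {'alice': ['TC-1', 'TC-2'], 'bob': ['TC-3', 'TC-4']} -- 35 vs. 35 minutes
```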

Test run results and statuses

Every case in a run gets one of five default result statuses:

  • Passed: Expected results match actual outcomes.

  • Failed: There is a discrepancy between expected and actual results. Triggers the defect creation workflow.

  • Blocked: Execution was prevented by a dependency, environment issue, or other external constraint.

  • Skipped: Intentionally not executed in this run.

  • Invalid: The case could not be completed; results are inconclusive.

Custom result statuses

Default statuses don't fit every team's workflow. You can create custom statuses in Workspace → Fields → Result status (requires admin permissions).

When defining a custom status, you choose whether it represents a positive or negative outcome. Negative statuses trigger the defect creation workflow, just like Failed. Custom statuses can be hidden but not fully deleted — this preserves historical result data.

Custom result statuses require a paid subscription.

How run status is calculated

When you complete a run, Qase assigns an overall status:

  • Passed — Every case has a positive result.

  • Failed — One or more cases have a negative result.

Muted tests are excluded from this calculation. Once assigned, the run status is not recalculated for subsequent changes.
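
In sketch form, the rule looks like the following (illustrative only; which default statuses count as negative is an assumption drawn from the status table above, not a documented constant):

```python
# Assumption: these defaults are treated as negative outcomes.
NEGATIVE = {"failed", "blocked", "invalid"}

def run_status(results):
    """results: (status, muted) pairs for every case in the run."""
    active = [status for status, muted in results if not muted]
    return "Failed" if any(s in NEGATIVE for s in active) else "Passed"

print(run_status([("passed", False), ("failed", True)]))    # Passed -- the failure is muted
print(run_status([("passed", False), ("blocked", False)]))  # Failed
```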

The run dashboard

Click a run's name to open its dashboard. This is your command center during execution.

Key areas:

  • Donut chart — Visual breakdown of cases by status. Click a segment to filter the case list.

  • Completion rate — Percentage of cases with results.

  • Run details sidebar — Environment, configurations, milestone, tags, author, start time, estimation, duration, and linked external issues.

  • Defects tab — All defects filed during this run with reporter, assignee, and integration status.

  • Team stats tab — Per-person metrics: work time, results submitted, and performance trend.

Estimation vs. duration: The estimation predicts how long the run will take based on historical execution times for the same cases. Duration shows actual elapsed time.

Total time vs. elapsed time: Total time is the sum of individual case execution durations. Elapsed time is wall-clock time from first result to last — a more accurate measure of how long the run actually took, since it includes setup, transitions, and overhead.
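
A small worked example makes the distinction clear; the timestamps are invented:

```python
from datetime import datetime, timedelta

# (start, end) per case -- illustrative data
executions = [
    (datetime(2024, 5, 1, 10, 0),  datetime(2024, 5, 1, 10, 12)),
    (datetime(2024, 5, 1, 10, 20), datetime(2024, 5, 1, 10, 35)),
    (datetime(2024, 5, 1, 10, 40), datetime(2024, 5, 1, 10, 48)),
]

total_time = sum((end - start for start, end in executions), timedelta())
elapsed_time = executions[-1][1] - executions[0][0]

print(total_time)    # 0:35:00 -- sum of individual case durations
print(elapsed_time)  # 0:48:00 -- wall clock, includes gaps between cases
```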

Timeline view

For automated runs, the timeline chart visualizes execution as a horizontal bar graph. Each bar represents a test case, with length proportional to duration and color indicating status.

Use it to:

  • Spot slow tests — Unusually long bars indicate performance bottlenecks.

  • Identify coincidental failures — Multiple failures at the same timestamp suggest environmental issues (network outage, infrastructure problem) rather than test defects.

  • Detect inefficiencies — Gaps between executions may mean your automation framework is running sequentially where it could parallelize.

Sharing and exporting

  • Share report — Generate a public link to share run results with stakeholders who don't have Qase accounts.

  • Export — Download results as CSV or PDF.

Syncing edits to test cases

If a case in an active run is still untested, edits to the source case in the repository sync automatically (refresh the page to see updates).

Once a result is recorded, the case snapshot is locked to the version that was tested. This is intentional — if you tested against version A of a case and someone later updates it to version B, your result should reflect what you actually tested, not what the case looks like now.

Completing a run

A run can end in two ways:

  • Complete — All cases have results (or you manually mark the run complete).

  • Abort — End the run early, leaving some cases untested.

If Auto complete is enabled in project settings, the run completes automatically when every case has a result.

Adding results to closed runs

By default, completed runs are locked. Enable Allow to add results for cases in closed runs in project settings if your team needs to submit late results or retest after completion. Open the closed run via the wizard, click the edit icon on a case, and add your result.

Cloning runs

Clone a run to re-execute the same (or a subset of) cases. From the run's menu, select Clone Run and choose which cases to include.

The most common use case: clone a run with only Failed cases to retest after fixes. You can filter by any combination of result statuses when cloning.

Bulk operations

In the run dashboard, select multiple cases to:

  • Assign or reassign them.

  • Submit a result for all selected cases at once.

  • Push selected cases to retest.

  • Delete cases from the run.

In the runs list, select multiple runs to delete them in bulk via Update selected → Delete.

Automated test run settings

When results come from API reporters rather than manual execution, additional project settings control how Qase handles incoming data:

  • Auto create test cases: Creates new cases in the repository when a result references a case that doesn't exist.

  • Auto update test cases: Updates existing case fields (steps, title, etc.) when a result includes changes. You can limit which fields and case types are updated.

  • Use test case titles from repository: When enabled, results display the repository case title rather than the title sent by the reporter.

  • Mark cases as automated: Automatically sets the automation status on cases that receive API results. Required for the qase.fields automation property to work.
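
For context, here is a hedged Python sketch of what a reporter-style submission might look like. The bulk endpoint and payload shape are assumptions based on Qase's public v1 API, so verify them against the current API reference:

```python
# Sketch of a reporter pushing a batch of results to an existing run.
# Endpoint and fields are assumptions -- check the API reference.
import requests

QASE_TOKEN = "your-api-token"  # placeholder
PROJECT, RUN_ID = "DEMO", 42   # placeholders

payload = {
    "results": [
        {"case_id": 101, "status": "passed", "time_ms": 1200},
        {"case_id": 102, "status": "failed", "time_ms": 3400,
         "comment": "AssertionError: expected 200, got 500"},
    ]
}

resp = requests.post(
    f"https://api.qase.io/v1/result/{PROJECT}/{RUN_ID}/bulk",
    headers={"Token": QASE_TOKEN, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
```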

Re-testing failures

You don't need to create a new run to re-test failures. There are two approaches:

  1. Retest within the run — Select failed cases (individually or via bulk actions) and push them to "retest" status. This clears their result so you can execute them again and record a new outcome, while preserving the original failure in the result history.

  2. Clone with filtered statuses — Clone the entire run but include only cases with specific results (e.g., Failed and Blocked). This creates a fresh run focused exclusively on the cases that need attention, which is useful when the retest is a separate effort with different assignees or timing.

We recommend retesting within the run for quick follow-ups and cloning for formal retest cycles that need their own tracking.

FAQ

Can I add cases to a run after it's started?
Yes. Open the run, select Edit Run from the menu, and add cases from the repository.

What happens if I re-run a case that already has a result?
The original result is preserved. A new result is created, and the case shows a "retest" status until you submit the new result.

Why doesn't my run status update after I changed a result?
Run status (Passed/Failed) is calculated once at completion and is not recalculated afterward. This is by design — it reflects the state of the run at the time it was closed.

How is estimated time calculated?
Qase averages the execution time of each case across previous runs; for example, a case that took 60, 90, and 120 seconds over its last three runs contributes 90 seconds to the estimate. If a case has never been run, it's excluded from the estimate.

What's the difference between aborting and completing a run?
Completing a run means all cases have results (or you're done and want to finalize). Aborting ends the run with some cases still untested. Both produce a final run status, but aborted runs clearly indicate that testing was cut short.
