Attempts


Summary

The Attempts page provides leaders with insight into all assessment-based content attempts made by users within their selected audience(s).

It highlights pass/fail performance, incomplete attempts, average scores, and which content items may require review due to repeated failures or low performance. This helps leaders understand assessment effectiveness, user readiness, and where content or question design may need improvement.

Visuals have a maximum data lag of 6 hours; in practice the lag is typically much shorter.


Filters

Use the filters at the top of the page to refine the dataset:

  • Structure Name / Audience Name / Position Title / Manager Name — isolate assessment activity for specific teams, roles, or reporting lines

  • Thrive Status (active, new) — suspended users are excluded by default

  • Thrive Role (learner, learneradmin) — platform super administrators are excluded

  • Content Title / Content Type / Content Tag — drill into specific assessment items or categories

  • Additional Field Key (Custom Field) — choose segmentation fields such as Region, Department, or Brand

    • Additional Field Value — narrow down to a specific value

  • Attempt Date filters — adjust the timeframe for which attempts are displayed

These filters help you understand assessment performance for the right segment of your organisation.
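
If you export the underlying data, the same segmentation can be reproduced outside the dashboard. The sketch below is a minimal illustration in Python/pandas; the file name, column names, and example filter values are assumptions for illustration, not the actual export schema.

```python
import pandas as pd

# Hypothetical export of the Attempts dataset; the file name and column
# names are assumptions, not the actual export schema.
attempts = pd.read_csv("attempts_export.csv", parse_dates=["last_attempted"])

# Reproduce the page filters on the exported data.
mask = (
    attempts["thrive_status"].isin(["active", "new"])                 # suspended users excluded
    & attempts["thrive_role"].isin(["learner", "learneradmin"])       # no super administrators
    & (attempts["content_type"] == "assessment")                      # example content type filter
    & attempts["last_attempted"].between("2024-01-01", "2024-06-30")  # example attempt date range
)
filtered = attempts[mask]
```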


Visuals


What You Need to Know

This panel provides key details about how the Attempts dataset behaves:

  • This page focuses on assessment-based attempts, showing pass, fail, and incomplete outcomes

  • The dataset only includes attempts with a valid content ID and registration ID

  • Attempt performance is calculated based on the last attempt within the chosen time range (see the sketch below this list)

  • Limit how frequently you export data, as frequent exports can impact performance
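
As a rough illustration of the last-attempt rule mentioned above, the sketch below keeps only the most recent attempt per user and content item; it reuses the hypothetical `filtered` dataset and assumed column names from the earlier example.

```python
# Keep only the most recent attempt per user and content item, mirroring
# the "last attempt within the chosen time range" rule (column names assumed).
last_attempts = (
    filtered.sort_values("last_attempted")
            .groupby(["user_id", "content_id"])
            .tail(1)
)
```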


Top Tips

Useful suggestions for getting more value from this dashboard:

  • Use Attempts by Content Title & Attempt Status to identify which assessments have the highest fail or incomplete rates

    • High failure rates may indicate outdated or unclear content

  • Once a problematic content item is identified, use Attempts by question description and outcome (via Explore) to narrow down which question(s) may need rephrasing

  • Use monthly average score trends to understand seasonal patterns or training periods that influence performance


Passed Attempts

Shows the % of all attempts (passed + failed) that ended in a pass.


Failed Attempts

Shows the % of all attempts (passed + failed) that resulted in a fail.


Average Score

Displays the average % score across all pass and fail attempts.


Incomplete Attempts

Shows the % of attempts that were incomplete (did not reach a full submission).

Helpful for spotting:

  • users exiting assessments early

  • technical issues

  • assessments that may be too long
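
The four summary tiles above are simple percentages. A minimal sketch of how they could be reproduced from the hypothetical `last_attempts` dataset in the earlier examples is shown below; the status values and column names are assumptions.

```python
# Completed attempts carry a Passed/Failed outcome; everything else is incomplete.
complete = last_attempts["attempt_completion_status"] == "Complete"
scored = last_attempts[complete]

passed = (scored["attempt_success_status"] == "Passed").sum()
failed = (scored["attempt_success_status"] == "Failed").sum()

passed_pct = 100 * passed / (passed + failed)   # Passed Attempts tile
failed_pct = 100 * failed / (passed + failed)   # Failed Attempts tile
average_score = scored["score"].mean()          # Average Score tile (pass + fail attempts only)
incomplete_pct = 100 * (~complete).mean()       # Incomplete Attempts tile
```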


Monthly Average Score

A line graph showing the average score achieved for all attempts in each month.

How to use this visual:

  • Identify months where average scores dip — may indicate onboarding periods or challenging content

  • Compare monthly patterns to training cycles

  • Spot unusually high or low performance trends
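
As an assumed-schema sketch, the monthly trend could be approximated from exported data by averaging scores per calendar month, reusing the hypothetical `scored` dataset from the earlier example.

```python
# Average score per calendar month, mirroring the Monthly Average Score line
# (column names assumed).
monthly_avg = (
    scored.assign(month=scored["last_attempted"].dt.to_period("M"))
          .groupby("month")["score"]
          .mean()
)
print(monthly_avg)
```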


Attempts by Content Title & Attempt Status

A segmented bar chart showing the proportion of:

  • 🟢 Passed

  • 🔴 Failed

  • 🟡 Incomplete

for each assessment-based content item.

How to use this visual:

  • See which content items have unusually high failure or incomplete rates

  • Identify which assessments may need review or rewriting
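
A rough way to reproduce this breakdown from exported data, again using the assumed columns from the earlier sketches, is to derive a single Passed / Failed / Incomplete label per attempt and compute its share per content title.

```python
# Collapse the two status columns into one Passed / Failed / Incomplete label.
status = last_attempts["attempt_success_status"].where(
    last_attempts["attempt_completion_status"] == "Complete", "Incomplete"
)

# Share of each status per content item, mirroring the segmented bar chart.
status_mix = (
    last_attempts.assign(status=status)
                 .groupby("content_title")["status"]
                 .value_counts(normalize=True)
                 .unstack(fill_value=0)
)

# Content items with the highest failure rate are candidates for review.
print(status_mix.sort_values("Failed", ascending=False).head(10))
```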


Attempts Detail

A detailed table listing every assessment attempt, including:

  • Content Title

  • Manager

  • User Name

  • Last Attempted

  • Attempt Success Status (Passed / Failed)

  • Attempt Completion Status (Complete / Incomplete)

  • Score (%)

  • # Passed Attempts

  • # Failed Attempts

  • # Attempts

How to use this visual:

  • Sort by Last Attempted to see the most recent assessment activity

  • Sort by Score to identify low performers

  • Identify users who have repeatedly failed the same content
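
To surface repeated failures programmatically rather than by scanning the table, a sketch along the following lines could be applied to exported attempt-level data; the column names are assumptions, not the actual export schema.

```python
# Count failed attempts per user and content item, then keep repeat failures.
failed_counts = (
    filtered[filtered["attempt_success_status"] == "Failed"]
    .groupby(["user_name", "content_title"])
    .size()
    .rename("# Failed Attempts")
)
repeat_failures = failed_counts[failed_counts >= 2].sort_values(ascending=False)
print(repeat_failures)
```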