The idea behind task-based usability testing is that every application’s user experience is made up of a series of steps along the user’s journey, each of which must be optimized to guide the user toward his or her end goal.
Each task, then, while part of a larger overall experience, is also a unique opportunity to create an intuitive and seamless interaction for the user. It is in finding where we fail to do this that we are able to improve our websites and applications.
Quantifying task usability
So if tasks are the building blocks of usability testing, is there a way to think quantitatively about the individual usability of the tasks we ask our testers to complete?
Qualitative feedback identifying problem areas is an invaluable output of user testing, but it does not allow us to compare usability across tasks and see the relative weight users assign to the problems (or lack of problems) they faced in each separate task.
With the implementation of the System Usability Scale, we complemented qualitative feedback with a way to measure and quantify overall system satisfaction and usability; but even a short 10-item questionnaire like the SUS would quickly become burdensome for testers if applied repeatedly after every task.
Measurable user testing metrics like number of clicks or time taken per task are useful for getting a handle on the effectiveness and simplicity of tasks, and are built into the bones of usability testing anyway, but they are not comprehensive. They require a great deal of extrapolation, and are better suited for benchmarking and setting targets.
The Single Ease Question
A more tightly focused method, which does not pile significant amounts of time, effort, or complexity onto the tester (or the researcher), is the Single Ease Question: the SEQ.
Like the SUS, PSSUQ, or ASQ, the Single Ease Question uses a Likert scale-style response system, but the similarities stop there. As its name implies, the SEQ is just one question: “How difficult or easy did you find the task?” And the response scale has 7 points, not 5.
This adds room for more nuance and a greater diversity of responses, while still preserving the only-one-question simplicity of the SEQ.
The Single Ease Question has been found to be just as effective a measure as other, longer task usability scales, and also correlates (though not especially strongly) with metrics like task duration time and completion rate.
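Because the SEQ yields a single 7-point score per task, aggregating it is straightforward. Below is a minimal sketch, in plain Python with entirely hypothetical task names, scores, and durations, of computing each task’s mean SEQ and the Pearson correlation between scores and task times (the relationship noted above is typically negative: easier tasks tend to take less time):

```python
from statistics import mean

# Hypothetical SEQ responses (1 = Very difficult, 7 = Very easy) per task
seq_scores = {
    "Find a product": [7, 6, 5, 7, 6],
    "Checkout": [4, 3, 5, 4, 2],
}

# Mean SEQ per task: a simple way to compare relative task difficulty
for task, scores in seq_scores.items():
    print(f"{task}: mean SEQ = {mean(scores):.1f}")

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired observations: one tester's SEQ score and task duration
scores = [7, 6, 5, 4, 3, 2]
durations = [30, 45, 50, 80, 95, 120]  # seconds
print(f"correlation = {pearson(scores, durations):.2f}")
```

With real data you would feed in your own testers’ responses; the point is only that a one-question metric still supports the same per-task comparisons as longer scales.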
In addition to its usefulness as a quantification tool, the SEQ can provide actionable diagnostic information with the inclusion of one more query: “Why?”
MeasuringU, of SUPR-Q fame, recommends asking testers the reason behind their ranking for scores of 5 or less (on a scale of 1-7) to get to the root of sub-par performances. Though this doubles the length of this short survey, the critical value it adds is in tying feedback to a causal relationship with specific problems that you can then act on to improve your website.
Interpreting and sharing the SEQ with UX Diagnostics
As part of our effort to provide a full range of both qualitative and quantitative perspectives on UX research and usability testing, TryMyUI has added the SEQ to our UX Diagnostics toolbox to help you understand the usability not only of your website as a whole, but also of the individual steps on the user’s journey through it.
With UX Diagnostics, a UX researcher’s quantitative dream, you can easily see each user’s journey, task by task, as well as the average experience overall. This is a great way to home in on the most interesting users before you ever watch a video, even if you’re just doing some simple A/B testing on your website or mobile app.
The SEQ is included in the Team and Enterprise plans as part of the UX Diagnostics feature suite. Get started with measuring the usability of your website by clicking below!