Increased developers' efficiency in conducting accessibility testing by 55%
Context
This industry-sponsored project aimed to improve Microsoft's Accessibility Insights for Web for developers with little to no knowledge of accessibility. The tool helps web developers evaluate and improve webpage accessibility through comprehensive manual and automated tests.
Duration
13 weeks
Team
Microsoft Product Manager, 2 UX Designers, 1 UX Researcher
Role
User Research,
Product Design
Overview
Problem
Developers found the tool challenging to use during their first few attempts at accessibility testing.
The Overview page is overwhelming and inefficient at summarizing the assessment.
Navigation on both the overview page and within tests is confusing.
These issues were especially pronounced for Novice Accessibility Developers, i.e., web developers with little to no knowledge of accessibility. We realized that they had a steeper learning curve than users with prior accessibility knowledge.
Solution
The tool’s information visualization and navigation were improved to make accessibility testing more intuitive for developers.
Visualisation of the assessment progress and the overall information hierarchy of the overview page were improved.
Enhanced information retrieval on both the overview page and the sidebar.
Research
Scope & Measuring Success
The aim was to improve the learnability of the tool for Novice Accessibility Developers (NADs).
Why did we focus solely on the experiences of novice accessibility developers?
Solving their usability issues would benefit users with more knowledge of accessibility as well.
It would reduce the perceived complexity and effort of accessibility testing for those unfamiliar with it.
Why prioritize learnability?
Since NADs encountered challenges during their initial attempts, we focused on learnability, assessing how easily users complete a task on their first try and the repetitions needed to become proficient.
What is our measure of success?
Reduce Time on Task on users' first attempt at each task
Improve the learning curve across successive trials
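To make these measures concrete, here is a minimal sketch (in Python, with placeholder numbers rather than the study's actual data) of how mean Time on Task per trial and the resulting learning gain could be summarised:

```python
# Minimal sketch (hypothetical data): summarising learnability from
# Time on Task (ToT) measurements across repeated trials.
from statistics import mean

# tot[task][trial] = ToT values in seconds, one per participant.
# These numbers are placeholders, not the study's actual data.
tot = {
    "Task 1": {
        1: [210, 180, 240, 200, 190],
        2: [150, 130, 160, 140, 155],
        3: [90, 85, 100, 95, 92],
    },
}

for task, trials in tot.items():
    means = {t: mean(v) for t, v in sorted(trials.items())}
    first, last = means[min(means)], means[max(means)]
    improvement = (first - last) / first * 100  # % drop in mean ToT
    print(f"{task}: mean ToT by trial {means}; learning gain {improvement:.0f}%")
```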
The following guide was used to direct our study.
Background Research
I conducted secondary research and tool walkthroughs to identify tasks challenging for novice accessibility developers.
Online reviews of the tool were gathered from the Chrome Web Store, Accessibility Insights' GitHub community, and Stack Overflow.
I performed the main assessment using the tests provided in the tool, annotating issues in Figma and Notion. I hypothesized that users might want quick access to failed tests to revisit later.
Mixed Methods Learnability Study
We conducted a Learnability Study, collecting time on task, number of errors, and think-aloud feedback.
2 out of 5 participants had prior knowledge of accessibility testing.
The study consisted of three 30-minute trials per participant.
Participants performed the same 3 tasks across all trials.
a) Open the tool and conduct a quick and dirty assessment
b) Assess the website for the keyboard navigation test in Quick Assess
c) From a completed assessment note the number of failed tests and their instance details
There was a 2-3 day gap between trials, matching the average frequency with which developers use the tool when first adopting it.
Post-study interviews were conducted, and System Usability Scale (SUS) scores were collected.
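For reference, SUS produces a 0-100 score from ten 1-5 Likert items; a minimal sketch of the standard scoring formula (the example ratings are made up):

```python
# Minimal sketch of standard SUS scoring on the 0-100 scale we collected.
# `responses` holds 1-5 Likert ratings for the ten SUS items, in order.
def sus_score(responses):
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 sum to 0-100

# Example with made-up ratings:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```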
Quantitative Data
We discovered that the tool became learnable after two trials, shifting our focus to the issues impacting first-use efficiency.
The average Time on Task (ToT) for Task 1 and Task 2 decreased over successive trials. For Task 3, there was an increase in ToT.
The error rate for all tasks dropped to zero from Trial 2 onward.
By the third trial, users efficiently completed all tasks, highlighting the tool's learnability. Despite achieving a zero error rate, Time on Task for the third task increased. Insights from think-aloud sessions revealed that as users grew accustomed to the tool, they sought additional ways to report failures.
Qualitative Data and Prioritisation
We further scoped the project to prioritize critical usability issues on the overview, side navigation, and onboarding.
We affinity mapped the qualitative data from Think Aloud and Post Study Interviews.
I conducted a visual analysis of the different ways users reported failure instances.
Using the insights we found across the tool, we prioritized them based on impact and cost.
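As a concrete illustration of that impact-versus-cost ranking, here is a minimal sketch; the issue names and scores below are hypothetical, not our actual ratings:

```python
# Minimal sketch (hypothetical scores): ranking usability issues by
# impact relative to cost, as in our prioritisation exercise.
issues = [
    {"issue": "Overview hierarchy", "impact": 5, "cost": 2},
    {"issue": "Side navbar labels", "impact": 4, "cost": 1},
    {"issue": "Onboarding tour", "impact": 3, "cost": 4},
]

# Higher impact and lower cost float to the top of the backlog.
for item in sorted(issues, key=lambda i: i["impact"] / i["cost"], reverse=True):
    print(f'{item["issue"]}: priority {item["impact"] / item["cost"]:.1f}')
```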
Findings and Design Implications
Our findings were categorized into four themes: Navigation Design, UX Copy, Visibility of Status, and Discoverability.
| Top Findings | Design Implications |
| --- | --- |
| Users struggled to promptly navigate to the test category they needed from the overview page and side navbar. | Clearly label test categories, improve the hierarchy, and reduce the number of elements. |
| Users were unable to report the overall assessment progress and the progress of each test from the overview. | Incorporate information graphics for progress and provide clear visual indicators. |
| Users were confused about the meaning of buttons like Save Assessment and Load Assessment. | Use more intuitive terminology or provide tooltips for actions. |
| Users could not spot the export button until the second trial. | Reposition or visually highlight the export button to ensure it's easily discoverable. |
Design
Process
We followed an iterative (and chaotic) design process.
3 rounds of iteration
3 HCI professionals
1 expert product designer
2 CS professionals
Iteration 1
| Top Findings | Design Implications |
| --- | --- |
| Participants were not sure about the difference between the number of test categories and the total number of tests. | Only display the total number of tests to reduce confusion. |
| Participants were confused about the status indicators for each test category. | Differentiate the visual representation of overall assessment progress from individual test pass/fail status. |
| Participants liked that test categories on the overview were clickable but still found the information overwhelming. | Only display pass/fail status. |
| Participants understood the meaning of the buttons but were not sure about the difference between Save and Export. | Adjust the hierarchy of the 'Save' and 'Export' actions to better convey their purposes. |
Iteration 2
| Top Findings | Design Implications |
| --- | --- |
| Participants preferred the original bar graph for representing this information. | Improve the visualisation of tests' pass/fail status. |
| Three columns of test categories were still overwhelming. | Display test categories in two columns. |
| The Product Designer and Product Manager indicated that while showing test priority would be useful, it would not be feasible. | Incorporate a visualisation that is both feasible and helpful for NADs during accessibility testing. |
| Participants did not recognize the progress indicators on the side navbar and felt the space was too cluttered. | Simplify the progress indicators and brainstorm alternative ways to navigate to a particular test. |