Originally posted on October 19, 2023, on Salesforce.com by Kate Hughes
Your users are eager to tell you how to improve your product experience. Just ask.
A website can pass or fail usability testing, though it usually lands somewhere in the middle. There's a difference between "Can they use it?" and "Can they use it easily?" Sometimes research-driven optimizations require a lot of effort. Other times, the fix is just adding text in the right place on the page. That was the case recently for Kailee Quinn, a senior UX/UI designer at longtime Salesforce Nonprofit partner Fíonta. Coming out of usability testing on a nonprofit's Experience Cloud site, she invested in low-effort optimizations that helped all users succeed.
She prioritized text changes in the onboarding flow, confirmation page, navigation bar, FAQ page, and rollovers. So when the site went live, users could do what they needed to do, easily.
This project didn't warrant new flows, configurations, or design systems. Instead, the team worked with what it had to help users. "We changed the words a little and made a big impact. It doesn't have to be an entire redesign every time," said Quinn, a self-described "fixer type" who's passionate about research.
Design research best practices
For an honest look at how the site actually performed, Quinn wrote clear task scenarios and set up a remote usability test. Each scenario asked users to complete a hypothetical task while allowing flexibility in how to do it. For example:
- You know that one of the benefits of membership is the research that is available to you. Please navigate to where you think you would find the research documents in the community and think aloud as you go.
Hearing the users’ thought process was critical. Quinn watched recordings of potential users attempting to complete task scenarios, including:
- Logging in with SSO (single sign-on)
- Messaging peers
- Replying to community posts
- Reading research studies
- Finding events
She watched their click paths, listened to how they articulated their thoughts, and timed how long each task took. If something caused confusion, she reached out to the user for follow-up feedback and additional discovery interviews. Participants happily shared the snags they hit and clarified their comments. In the end, she compiled more than 50 optimization recommendations. The team prioritized what would make the biggest difference quickly. Text fixes topped the list:
5 text usability optimizations
- Onboarding text: To keep messages intended to be private from being posted in the public forum, Quinn used the Experience Cloud text component to add a one-line instruction at the top of the Groups template.
- Feature comparison text: Also in Groups, users couldn’t differentiate between a post, poll or question. Using the same text component, Quinn added guidance that compared the options. This allowed users to more confidently make a selection when asking for group feedback.
- Navigation text: Participants struggled to find the volunteering link in the main menu component. After some follow-up conversations with respondents, Quinn changed the link text to a more recognizable label.
- FAQ text: Quinn built a new custom FAQ page with Experience Builder, which is a snap when the page is all text.
- Info icon text: Participants couldn't decipher some required fields. To fix this, Quinn clarified the field-level help text that appears on hover (see the metadata sketch after this list).
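In Salesforce, that hover help lives on the field definition itself, so a fix like this can be a one-line metadata change. Here's a minimal sketch of a custom field's metadata file with clarified help text; the field name, type, and wording are hypothetical stand-ins, but inlineHelpText is the standard Metadata API property behind the info-icon tooltip.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical example: Tax_ID__c.field-meta.xml in a source-format project -->
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>Tax_ID__c</fullName>
    <!-- Rendered as the info-icon tooltip next to the field label on hover -->
    <inlineHelpText>Enter your organization's 9-digit federal EIN, for example 12-3456789.</inlineHelpText>
    <label>Tax ID</label>
    <length>20</length>
    <required>true</required>
    <type>Text</type>
</CustomField>
```

Because the help text is part of the field's metadata rather than the page layout, clarifying it once updates the tooltip everywhere the field appears.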
Failing with a subset of users in usability testing ended up helping everyone (solve for one, extend to many). When the site went live, users didn’t get stuck.
Research drives user success
Quinn credits UX research for helping teams be more proactive and less reactive.
"What I've learned the most as a UX designer is that I don't know all the answers, and I'm not supposed to know," Quinn said. "I have to ask questions. It might feel uncomfortable to ask for design feedback, but that's normal." Quinn acknowledges that designers may initially feel vulnerable watching people navigate their designs in usability testing. And some teams may simply not want to know what's broken if they lack the resources to change it. She gets it.
Ultimately, Quinn follows this perspective: "Technically something might work, but are its users struggling? Are there really simple things we could've fixed?" By asking those questions, she works toward her personal aim of making experiences easier and less painful for us all.