Would you believe that just by changing a single button on its checkout page, an ecommerce retailer managed to boost its annual sales by an extra $300 million? Hard to believe, but it’s a real story. Searching for the reasons behind high cart abandonment rates, the retailer gambled on usability testing and proved correct.
The key finding was to replace the off-putting “Register” button with one saying “Continue without registration,” which the UX design team happily did, only to see a 45% growth in the number of purchasing customers and the new revenue they brought.
Usability testing repeatedly proves its paramount importance, especially for traffic-dependent B2C web and mobile applications. This testing discipline helps find and fix problems hampering the conversion rate, verify concepts, and even get some inspiration from the target audience.
Currently, there is a plethora of UX testing methodologies, but all of them agree on exploring target audience behavior with respect to the following aspects:
Although widely acknowledged, these methodologies often limit UX testers’ capacity to spot and analyze business-critical inconsistencies and under-delivery on user expectations. Calling them business-critical here is no exaggeration: as you’ll see in the next chapter, overlooking the importance of well-oiled UX may wreak havoc quite literally.
To avoid that, we offer usability testing advice drawn from years of Itransition’s own practice. This approach will require more commitment to UX testing on your part, hence the extra mile. But it is also much more likely to result in stellar UX and UI, to your audience’s satisfaction and your business’s higher gains.
Usability testing can avert some really nasty situations that go beyond mere revenue loss. In one recent infamous example, an employee of the Hawaii Emergency Management Agency sent out a real state-wide missile attack alert instead of the intended test one.
A closer look at this case reveals inherent flaws in the alert system’s design that invited such a human error. In his analysis of this remarkable mistake, Sean Dexter pinpoints poor menu navigation and the lack of options to revoke a faulty message or at least amend it with a clarifying follow-up.
Public sector agencies don’t necessarily recognize how critical software QA services are for their operations until something like this happens. Luckily, companies from many other domains have long embraced usability testing as a means of improving their products and raising the bar of customer service.
This brings us to the first of our usability testing tricks that can help software product owners improve their offering and avoid hitting the headlines for all the wrong reasons.
Usability testing should become an integral part of the entire product development cycle. Getting a UX testing team on board from the very beginning is a sure way to avoid costly reworks later in the project. At the same time, usability test findings themselves can suggest quite a few ideas on how to create a competitive user experience that stands out.
Itransition’s own project perfectly illustrates this.
Some time ago, Itransition was approached by none other than adidas to help them redesign their innovative miCoach application.
The project aimed to bring the sportswear giant to the forefront of the fitness wearables industry, but its success heavily depended on how users would get on with what the devices had to offer. Needless to say, usability was at the top of the brand’s priority list.
This is where our mature quality assurance framework came in handy. We integrated usability testing into the full-scope QA processes that also touched on functional and integration testing. Our usability testing team was active throughout the project, working side by side with the developers from the initial design specification drafting to the actual release of the revamped app to the market.
You can read the full case study here.
There is a widespread conviction that your target group’s representatives are the best reporters on a product’s usability. This works well when you need to kill two birds with one stone: find UX bottlenecks and collect opinions from your target users at the same time. However, this is where usability testing and market research get mixed up.
One of the most paradoxical usability testing tips you might get is this: usability testing is about catching and eliminating usability defects, not about understanding your target audience. This means you don’t have to involve mainstream users to find and fix your product’s usability problems.
The best-kept secret of usability testing is the extent to which it doesn't much matter who you test. For most sites, all you really need are people who have used the web enough to know the basics.
This leaves you with a much broader choice of sample groups for usability tests. For example, if we take a consumer electronics online shop, the elderly and teens on the margins of the target audience would be a valuable source of insights for usability research. Their age-specific physical, sensory, and cognitive abilities could uncover basic mistakes you would never catch with your mainstream users.
Likewise, it makes sense to engage a diverse audience with varied professional backgrounds and experiences. You can even go as far as to invite potential haters of your product. Such a counterintuitive approach has its perks: you are likely to reveal unexpected UX bottlenecks and even expand your audience by making the interface simpler, more logical, and accessible to everybody.
It’s true that recruiting such participants can sometimes be challenging, but professional usability testers know how to go about it. One way is to find the audience right out there—literally by going to shopping malls or other high-traffic venues. Give visitors your software to test and ask them to describe the user experience off the cuff.
This method is called guerilla usability testing and is ideal if you want to get the first-hand opinion of people who probably have never heard about either your brand or your app.
It worked in this Zara app example, where an independent professional working on usability testing recommendations took the app to the retail chain’s showroom and ran guerilla tests with random passers-by. This seemingly simple procedure, in fact a field study, brought a wealth of insights into app UX improvements, such as to the shopping cart photo gallery and order editing.
The majority of literature on usability testing suggests rather rigid contextual inquiries and fixed-form surveys of different types as the most effective testing tools. Yet there is a huge gap between what people say and what they actually feel when they are aware of being studied. In social psychology, this phenomenon is called the Hawthorne effect: the tendency to modify one’s behavior when under observation.
Unnatural settings, close observation, and straightforward questions make people feel strained and give misleading answers. Moreover, they usually can’t explain their motives clearly enough.
For this reason, one of our usability testing tips is to prioritize participants’ natural behavior and uncontrolled reactions when making any conclusions about your product’s usability. Usability research is an inherently artificial process, but it is likely to give better results when it is as close to reality as possible. The strains of usability testing can be relieved in the following ways:
Hide the cameras, take as few notes as possible, and use sound recording instead of being present in the testing room. If possible, let users perform tasks alone in an isolated room with no observers whatsoever.
Here, users go through pre-designed automated scripts. Such tests are great for collecting large samples of data and are effective when done remotely. This could be your option if you want to get as many responses as possible, quicker and at lower costs than with moderated scenarios, all without interrupting the natural flow of users’ interaction with your product.
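To give a concrete flavor of what an unmoderated test yields, here is a minimal sketch of aggregating session logs into per-task metrics. The field names (`task`, `success`, `seconds`) are illustrative assumptions, not the export schema of any particular testing tool.

```python
from statistics import mean

def summarize_sessions(sessions):
    """Aggregate unmoderated-test logs into per-task metrics.

    `sessions` is a list of dicts like
    {"task": "checkout", "success": True, "seconds": 42.0},
    one entry per attempted task (hypothetical schema).
    """
    by_task = {}
    for s in sessions:
        by_task.setdefault(s["task"], []).append(s)
    report = {}
    for task, runs in by_task.items():
        report[task] = {
            # Share of participants who completed the task
            "success_rate": sum(r["success"] for r in runs) / len(runs),
            # Average time spent on the task
            "avg_seconds": mean(r["seconds"] for r in runs),
        }
    return report
```

Success rate and time on task are the two metrics most unmoderated platforms report out of the box; computing them yourself is mainly useful when you export raw logs for deeper analysis.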
People feel far more comfortable working in their familiar environment and using their own devices. Observing users at their workplace or home eliminates a lot of lab-associated artificiality.
One of the branches of this approach is the ethnographic study, used to understand how a user’s context, including their location and general surroundings, influences their experience with the tested product. One of the positive side effects of such research is that you can test the waters and see whether there is potential demand for your product in a certain location.
When you do go for moderated usability tests, it’s crucial to ask the right questions.
With moderated tests, you can benefit from UX professionals keeping track of the test course and making their expert conclusions about the subject matter. Yet, it’s important to maintain a healthy balance between invisibly guiding test participants and telling them what to do.
In this regard, the usability testing tricks would be the following:
Tracking users’ actions is probably the most reliable way of estimating whether your message gets through. There are quite a few ways to do it, and our usability testing advice here is to combine them for more insightful research.
Evolving from click tracking, this method captures the user’s eye movements during interaction with a website. Based on such tracking, UX testers generate so-called heat maps showing the degree of gaze fixation on each particular object. All you need for eye tracking is a web camera and dedicated software.
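As a rough illustration of how fixation data turns into a heat map, here is a stdlib-only Python sketch that buckets gaze points into a coarse grid and renders it as ASCII shading. The coordinates are hypothetical; real eye-tracking software estimates them from the webcam feed.

```python
from collections import Counter

def heatmap(fixations, width, height, cols=8, rows=4):
    """Bucket (x, y) gaze fixation points into a coarse grid of cell counts."""
    counts = Counter()
    for x, y in fixations:
        col = min(int(x / width * cols), cols - 1)
        row = min(int(y / height * rows), rows - 1)
        counts[(row, col)] += 1
    return counts

def render(counts, cols=8, rows=4, shades=" .:*#"):
    """Render the grid as ASCII art: denser fixation maps to a darker glyph."""
    peak = max(counts.values()) if counts else 1
    lines = []
    for r in range(rows):
        line = ""
        for c in range(cols):
            level = int(counts.get((r, c), 0) / peak * (len(shades) - 1))
            line += shades[level]
        lines.append(line)
    return "\n".join(lines)
```

Production heat-map tools overlay a smoothed color gradient on a screenshot, but the aggregation step is essentially this binning.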
A sub-category of the click tracking method is first-click testing. It’s simple yet effective when you want to discover the first eye-catching element on a page or a screen and compare it to the one intended by the UX designers. If you pinpoint some discrepancies, this in itself would be food for thought.
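A first-click test boils down to simple arithmetic over each participant’s first click. The sketch below, using illustrative parameter names and made-up data, computes the hit rate on the intended element and the median time to first click.

```python
def first_click_stats(clicks, target):
    """Summarize a first-click test.

    `clicks`: one (x, y, seconds) tuple per participant's first click.
    `target`: (left, top, right, bottom) bounds of the element the
    designers intended users to click first (illustrative format).
    """
    left, top, right, bottom = target
    hits = [c for c in clicks
            if left <= c[0] <= right and top <= c[1] <= bottom]
    times = sorted(c[2] for c in clicks)
    return {
        "hit_rate": len(hits) / len(clicks),   # share who clicked the target
        "median_seconds": times[len(times) // 2],  # typical time to decide
    }
```

A low hit rate on the intended element, or a long time to first click, is exactly the kind of discrepancy the article calls food for thought.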
First-click tracking goes well with blur testing. To conduct it, usability test participants are shown deliberately blurred images of a website or an app where colors and shapes are the only visible elements. The test checks whether users can point to the right components, such as a call-to-action button, from their basic contours alone.
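Preparing stimuli for a blur test usually means running a screenshot through a Gaussian blur in an image editor or imaging library; the naive stdlib-only box blur below just shows the underlying idea on a grayscale pixel grid.

```python
def box_blur(pixels, radius=1):
    """Apply a naive box blur to a 2D grayscale image (lists of 0-255 ints).

    Each output pixel becomes the average of its neighborhood, so sharp
    details wash out while large color/contrast blocks survive, which is
    what a blur test relies on.
    """
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, n = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += pixels[yy][xx]
                        n += 1
            out[y][x] = total // n
    return out
```

In practice you would blur an actual screenshot (e.g. with Pillow’s `ImageFilter.GaussianBlur`) rather than hand-roll the filter; the point is that only coarse layout and contrast should remain visible.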
Though not directly involving tracking, this quick memory testing can say a lot about the efficiency of the information architecture and attractiveness of the UI design.
Just show a screenshot to the participant for five seconds before it is removed. Then ask the person to recall which elements stood out most, quickly and without thinking it over. During the five-second observation, the participant will be able to get a glimpse of your product and provide valuable feedback on the layout and most eye‑catching elements.
The final step of an effective usability testing project is always to rerun tests after each UX/UI improvement is introduced. Benchmarking UX variations against each other to track changes in user behavior and experience is necessary to validate test findings and track progress.
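Whether a UX variant actually moved the needle can be checked with a standard two-proportion z-test on conversion counts. The sketch below uses illustrative figures, not numbers from a real project.

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of two UX variants.

    `conv_*` are conversion counts, `n_*` are visitor counts.
    Returns the z statistic and a two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

A p-value below the conventional 0.05 threshold suggests the change in behavior is unlikely to be noise, though for small effects you still need a decent sample size in each variant.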
Certainly, usability testing is a vast discipline that needs a whole series of articles to cover. Yet these usability testing recommendations, sourced from dozens of Itransition’s real projects, can help you go beyond the common model and discover new insights.