The right time for valuable insights: how much design fidelity do you need for user testing?





One of SimpSocial’s guiding principles for product development is to “think big, start small, and learn quickly.”

 

One of the ways we learn quickly is by running evaluative research to build confidence in the product design direction we're pursuing. In the early stages of product development, often before any code has been written, we regularly put product concepts in front of users.

 

This approach to user testing raises a crucial, recurring question: how much design fidelity should you show your study participants? How close to the final design do those concepts need to be for the people testing them to give reliable insights? Finding the right balance between "polished but late" and "rough but early" can be hard. What level of design abstraction hits the sweet spot?

 

When discussing internally what to show participants, we've found it helps to avoid the word "design" and instead use the term "research stimulus."

 

Words matter

First off, even though we talk about design fidelity, we've found it useful to avoid the word "design" when discussing internally what to show participants during user testing. Instead, we prefer "research stimulus." Terms like "research stimulus" or "provocation artifact" may sound a little odd, but they help participants understand that what you're about to show them is a tool for eliciting reactions to an overall concept, not to specific design details.

 

Start with the problem

When putting together a research stimulus, it can be tempting to use the newest, most polished design files available, but there are costs to consider. Participants will respond to the highest level of fidelity you show them; if what you want to know is whether a concept has value, comments about the microcopy on a CTA will only dilute the input you're looking for.

 

Instead, we've found it helps to start with the problem, that is, what we're trying to learn, and then match the fidelity of the material we show participants during user testing to that problem.

 

So, to choose the right level of design fidelity, first decide which of the following questions you're trying to answer:

 

Concept validation: does this idea have value?

System design test: can users form a "good enough" understanding of this system?

Usability test: can people use this product?

The question you're trying to answer will determine the level of design fidelity your research stimulus needs.

 

Clarifying the problem up front also helps you establish alignment with your cross-functional colleagues in product, design, and engineering. This is particularly important because building a research stimulus is a joint effort, and ownership can be murky.

 

"You're not interested in their reactions in the early phases. You want to know if it would be useful to them."

 

Concept validation: does this idea have value?

The question you're wrestling with here is "Does this idea have value?" Do customers believe this concept will actually help them solve their problems? In the early phases, you don't care how they react to it. You want to know whether it would be useful to them.

 

During the test itself, you should verify that the participant genuinely has the problem you're trying to address, walk them through the concept, and gauge whether they expect it to offer a useful solution to that problem.

 

At this stage, your research stimulus does not need to resemble an interface. In fact, it's probably better if it doesn't: if you present ideas as an interface, be aware that participants may immediately start thinking about how they'd interact with it, how it would relate to the rest of your product, and what they think of the visual design. Participants gravitate toward this kind of feedback because it's easier to give than digging into the concept itself and how it might solve the problem they're facing. Your research stimulus only needs to convey the hypothesis you're testing.

 

Plenty of formats work, including short written descriptions, storyboards, landing page mockups, diagrams in Google Slides, and whiteboard-style sketches. Keep it simple and focused by conspicuously using placeholder visuals and blurring out extraneous copy.

 

System design test: can users form a "good enough" understanding of this system?

Sometimes a product team is designing an entire system rather than just interfaces or flows. If you're doing research at this point, your main concern is probably whether, based on what the team has conceptualized so far, your customers can build useful mental models that let them use the product without getting stuck or confused. The best evidence of this is whether users can accurately predict how they would approach representative tasks in your system.

 

You could simply show participants abstract system diagrams and ask how they would expect to interact with them, but that lacks much ecological validity. Showing stimuli that resemble user interfaces makes the research feel more realistic. Again, though, we've found that having participants interact with a prototype almost always pushes interaction design to center stage, which is still not the problem you're trying to solve here.

 

[Image: user testing in line with design fidelity]

 

Instead, share your screen, handle the precise interactions that move the participant through the product yourself, and ask them to describe how they would expect to solve realistic problems using your mocked-up interfaces. This saves you from building fully interactive prototypes before you're confident in the design direction, and from risking frustrating participants with janky interactions.

 

Usability test: can people use this product?

At this point, you want to assess whether a user, given a realistic level of assistance (i.e., no more onboarding than you're likely to offer in the finished product), can engage with your product and complete tasks that correspond to the problem your product is meant to solve.

 

Interaction design matters much more in usability testing of any kind. You can test usability with prototypes of varying fidelity, as long as you are clear and specific about the tasks you want participants to complete and evaluate, and the prototype supports the interactions required to make those tasks possible.

 

To sum up

The boundaries between these phases are blurry, and choosing the best approach for your circumstances can be tricky. Be prepared to adapt the questions you ask and the research stimulus from session to session.

 

It also helps to pay attention to how participants respond to the research session as a whole, not just to the research stimulus. If you sense confusion or frustration, you have more work to do. And whenever you get stuck, remember to come back to the problem: what exactly are you hoping to learn from the user testing session?





