Different types of user testing and when to use them

It’s important to understand the various methods used for validating designs so you can choose the best one for each situation.

Getting Started

A user-centric design process emphasizes integrating user feedback into the design, so different methods are used at each stage of the design process to gather that feedback.

Here is a list of the most commonly used testing methods, with a breakdown of when to use each:






In-person Usability Testing

Tests are conducted in the users’ own environment.

When to use:
  • When the design is in a fairly mature form
  • When basic functionality is in place but the experience needs to be evaluated

Pros:
  • Provides detailed insights
  • Lets you observe users in their natural environment

Cons:
  • Expensive, as the research team needs to travel to the users’ location
  • Equipment is required to set up and record the tests


Lab Testing

Conducted in a lab environment with a moderator who communicates with the user while observers watch how users perform given tasks.

When to use:
  • When a mature version of the design is available and needs to be evaluated in detail
  • During the design phase
  • To define the requirements of a redesign project

Pros:
  • Provides focused feedback
  • Good-quality data can be gathered

Cons:
  • Takes time and effort to plan and execute
  • Expensive
  • Considerable preplanning is required to invite participants to the lab


Remote Usability Testing

Used when the design exists at least as a click-through prototype but reaching users in person is not possible.

Pros:
  • Cost-effective
  • Task success can be evaluated

Cons:
  • Limited qualitative information about users’ difficulties
  • Users need to be tech-savvy, or be given technical support, to participate


Unmoderated Testing

The test is run through an application that tracks users’ interactions during the test.

When to use:
  • During the design phase, to gather a large amount of quantitative data on specific functionality
  • To test a beta version of a product for refinement before releasing it to market
  • To gather quantitative data on the usability of a design
  • To measure task success

Pros:
  • Run with tools, so live monitoring is not required

Cons:
  • No qualitative data can be gathered
  • No data on users’ reasoning or decision-making


Guerrilla Testing

Used to validate rough wireframes or lo-fi prototypes with users in a casual setting.

When to use:
  • In the ideation phase
  • When a design idea is in place and a rough design has been put together

Pros:
  • Enables high-level design validation
  • Relatively little preparation required
  • Allows multiple quick design iterations and rounds of testing

Cons:
  • Feedback on interactivity cannot be gathered
  • Users may not be able to articulate feedback on the finer aspects of the design


Automated Accessibility Testing

The product is run through an automated tool that checks it against accessibility rules.

When to use:
  • Only when a working product is in place; it cannot be done with a prototype

Cons:
  • Results depend on choosing the right testing tool
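As an illustration of the kind of rule such tools apply, here is a minimal sketch (not any specific tool’s API) that flags `<img>` tags missing an `alt` attribute, one of the most common accessibility violations:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags that lack an alt attribute (a basic WCAG check)."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            # getpos() returns the (line, column) of the offending tag
            self.violations.append(self.getpos())

checker = MissingAltChecker()
checker.feed('<p><img src="a.png" alt="logo"><img src="b.png"></p>')
print(checker.violations)  # the second <img> has no alt attribute
```

Real tools run hundreds of such rules; this sketch only shows why a rendered product (actual markup) is needed rather than a prototype.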


A/B Testing

Used for evaluating two variations of a single feature or functionality.

Note: This method is NOT meant for comparing entirely different versions of a design.

Pros:
  • Makes it easy to decide between two solutions

Cons:
  • Only a single feature or piece of functionality can be evaluated at a time
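Deciding between the two variants usually comes down to a statistical comparison of their conversion rates. A minimal sketch using a two-proportion z-test (all numbers are hypothetical):

```python
from math import erf, sqrt

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 12% vs 15% conversion over 1000 sessions each
z, p = ab_significance(120, 1000, 150, 1000)
print(round(z, 2), round(p, 4))
```

With these numbers the p-value falls just under 0.05, which is why sample size matters: smaller samples with the same rates would not be conclusive.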


First Click Testing

One of the quickest testing methods: it records where users click first when given a task on the digital product being evaluated.

When to use:
  • On a final design or live version of the product

Pros:
  • Simple method
  • Minimal setup required

Cons:
  • Qualitative data and deeper insights cannot be gathered



Eye Tracking

Tracks what users look at most on a UI.

When to use:
  • Only on a live product
  • When you need to check whether users are reacting to specific parts of the UI as expected

Pros:
  • Can verify whether users behave as expected on a specific screen or piece of functionality
  • Done with software; no human intervention required
  • Can be used to evaluate several UIs

Cons:
  • Only quantitative data can be gathered


Mouse Tracking & Click Tracking

Tracks what users do most on a UI.

When to use:
  • Only on a live product
  • When you need to check whether users are reacting to specific parts of the UI as expected

Pros:
  • Can verify whether users behave as expected on a specific screen or piece of functionality
  • Done with software; no human intervention required

Cons:
  • Only quantitative data can be gathered
  • Insights derived from the data can be very subjective
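Click-tracking tools typically aggregate raw click coordinates into a heatmap. A minimal sketch of that aggregation step (grid size and coordinates are made up):

```python
from collections import Counter

def click_heatmap(clicks, width, height, grid=10):
    """Bin raw (x, y) click coordinates into a grid x grid heatmap."""
    counts = Counter()
    for x, y in clicks:
        # clamp to the last cell so clicks on the far edge still count
        col = min(int(x / width * grid), grid - 1)
        row = min(int(y / height * grid), grid - 1)
        counts[(row, col)] += 1
    return counts

clicks = [(10, 12), (14, 18), (980, 30), (12, 15)]
heat = click_heatmap(clicks, width=1000, height=800)
print(heat.most_common(1))  # hottest cell and its click count
```

The subjectivity mentioned above comes in after this step: the counts are objective, but deciding why a cell is hot or cold is interpretation.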


Card Sorting

Used for evaluating information architecture (IA) during the design phase, since it does not require any prototypes or UIs.

When to use:
  • During the design phase; it can be done in the pre-design phase as well

Pros:
  • Simple method
  • Only basic preparation is required

Cons:
  • Can only be used for testing IA
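Open card-sort results are commonly synthesized into a co-occurrence matrix showing how often participants grouped each pair of cards together. A small sketch with hypothetical participants and card names:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(sorts):
    """Count how often each pair of cards was grouped together.
    sorts: one {group_name: [cards]} dict per participant."""
    pairs = Counter()
    for sort in sorts:
        for cards in sort.values():
            # sort the pair so (a, b) and (b, a) count as the same key
            for a, b in combinations(sorted(cards), 2):
                pairs[(a, b)] += 1
    return pairs

# Hypothetical results from three participants
sorts = [
    {"Account": ["Login", "Profile"], "Shop": ["Cart", "Checkout"]},
    {"Me": ["Login", "Profile", "Cart"], "Buy": ["Checkout"]},
    {"User": ["Login", "Profile"], "Orders": ["Cart", "Checkout"]},
]
pairs = cooccurrence(sorts)
print(pairs[("Login", "Profile")])  # grouped together by all 3 participants
```

Pairs with high counts are strong candidates to sit under the same navigation category in the final IA.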


Tree Testing

When to use:
  • During the design phase, once the navigation schema is defined

Cons:
  • Needs some basic preparation of a prototype for testing the navigation elements
  • If major issues are identified, it implies design rework
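Tree-test results are typically summarized as task success and directness (reaching the right destination without backtracking). A minimal sketch with hypothetical attempts:

```python
def tree_test_metrics(attempts):
    """attempts: list of dicts with 'destination', 'correct', 'backtracked'.
    Returns (success rate, directness rate) for a single tree-test task."""
    n = len(attempts)
    success = sum(a["destination"] == a["correct"] for a in attempts) / n
    direct = sum(a["destination"] == a["correct"] and not a["backtracked"]
                 for a in attempts) / n
    return success, direct

attempts = [
    {"destination": "Home > Plans > Pricing",
     "correct": "Home > Plans > Pricing", "backtracked": False},
    {"destination": "Home > Plans > Pricing",
     "correct": "Home > Plans > Pricing", "backtracked": True},
    {"destination": "Home > Support",
     "correct": "Home > Plans > Pricing", "backtracked": True},
]
success, direct = tree_test_metrics(attempts)
print(round(success, 2), round(direct, 2))  # → 0.67 0.33
```

A low directness score with high success often signals labels that are eventually findable but not intuitive, which is the kind of finding that implies design rework.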


Heuristic Evaluation

When to use:
  • When a product or design is in place but needs to be evaluated internally, without engaging users or customers

Pros:
  • Helps identify problems with specific aspects of the design
  • Cheap and easy

Cons:
  • Does not reveal the reasons behind problems


Cognitive Walkthrough

An expert steps through the design while posing as the user.

When to use:
  • When a design or product exists and needs to be evaluated from the user’s perspective

Pros:
  • Provides qualitative pointers
  • Time- and cost-effective

Cons:
  • Does not yield deep insights, though it is good for validating what users do on the product and how
  • May give subjective results


Design Audit

An existing version of the design is evaluated by walking through it with stakeholders.

When to use:
  • When the design is still taking shape and is not ready to go out to users for testing

Pros:
  • Time- and cost-effective
  • Can be done before the design is finalized

Cons:
  • Not in-depth
  • May not bring in the users’ perspective

To learn more about these testing methods and their relevance in the design process, refer to How to conduct research and capture actionable insights.

How to?

Any user testing method is conducted in the following steps:

  1. Goal: Define the goal for which the test needs to be conducted.

  2. Method: Select the most appropriate method based on your need, stage of design process, available resources, etc.

  3. Prepare: Plan the test, prepare, schedule, and script out the test plan in detail.

  4. Recruit: Find the most suitable participants for the selected method.

  5. Run: Run the test as planned and gather data.

  6. Synthesize: Synthesize the data received from all the tests and extract pointers for the rest of the design process.

Do’s & Don'ts



Do’s:

1. Select the right user testing method.

2. Prepare well.

3. Select appropriate users.

4. Have a clear intent for the test activity.

Don’ts:

1. Don’t run ad-hoc tests and expect meaningful results.

2. Don’t recruit random people as participants.




Suggested Tools 

For Preparation & Planning

    • GoogleDocs

    • Cubyts

For Conducting Surveys

    • SurveyMonkey

    • Typeform

    • Google Forms 

For Usability Testing

    • UserZoom

    • UserTesting

    • Usability Hub

    • Dscout

    • LookBack

    • Optimal Workshop

For Research Synthesis

    • Miro

    • Mural

    • GoogleSlides

    • GoogleDocs

For Sharing Documents

    • Google Drive

    • DropBox


Other Related Best Practices

  • User Research Basics 

  • How to do a Design Audit

  • How to conduct Usability Testing

  • How to conduct research and capture actionable insights