
Closed question response categories

5 August 2008

There are two main types of questions on forms: open and closed. As defined by Bradburn, Wansink and Sudman (2004), “respondents answer open-ended questions in their own words” whereas closed questions require the respondent to select from a “limited list” of “predetermined categories”. These predetermined categories are what we refer to as response categories.

Some closed questions require only a single answer whereas others allow the respondent to choose multiple response options. In the electronic realm, questions for which only a single response option can be chosen often use radio buttons; questions that allow multiple responses use checkboxes (also known as tickboxes). (Drop-down lists can be used in both cases, depending on their behaviour.)
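To make the mapping concrete, here is a minimal sketch (in TypeScript) of how a form builder might choose between the two controls. The question shape and field names are illustrative assumptions, not from any particular library:

```typescript
// Illustrative sketch: pick radio buttons for single-response questions
// and checkboxes for multiple-response questions. Names are hypothetical.
interface ClosedQuestion {
  name: string;        // form field name
  options: string[];   // the predetermined response categories
  multiple: boolean;   // may the respondent pick more than one option?
}

function renderOptions(q: ClosedQuestion): string {
  // Radio buttons sharing a name are mutually exclusive in the browser;
  // checkboxes sharing a name submit one value per ticked box.
  const type = q.multiple ? "checkbox" : "radio";
  return q.options
    .map(o => `<label><input type="${type}" name="${q.name}" value="${o}"> ${o}</label>`)
    .join("\n");
}

console.log(renderOptions({
  name: "job-title",
  options: ["Developer", "Designer", "Other"],
  multiple: false,
}));
```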

Features of good response categories for closed questions

This article outlines the features of successful closed question response categories. We propose that there are five key features that designers should strive for, as follows:

Appropriate
Specifically account for the main responses you are likely to get.
Complete
Ensure that there is a response option to suit each and every respondent.
Self-explanatory
The meaning of category names or descriptions should be clear to the respondent.
Mutually exclusive
Categories should not overlap.
Unbiased
Categories should not be skewed in one dimension at the expense of another (valid) dimension.

At first glance these seem like very obvious and sensible features to strive for in your closed question response categories. However, like many things in forms design, it's a case of easier said than done. The following examples illustrate just how difficult it can be to make sure that your response categories cover all the bases.

Appropriate

Knowing what response categories to use relies on understanding the concept being measured. To put it another way, we need to have a sense of what sort of answers people are likely to give, before we have even asked the question. This is one reason why a question might be asked in an open format initially (and changed to a closed format in later versions of a form): to get a sense of the range of common responses.

In our opinion, this is the aspect of closed question response category design that is usually done best: it is rare to see an example where a really common option is omitted. Having said that, we were surprised to see that the account sign-up form on the Adobe (US) website did not include “designer” as a job title (see Figure 1). Given that many of Adobe's flagship products are creative tools (e.g. Flash, Illustrator, InDesign and Photoshop), one would assume that designers make up a large proportion of its account holders. (Accounts provide access to tutorials, downloads, extensions and more.)

Screenshot of Adobe form showing available job titles.
Figure 1: No “Designer” job title for a form about products used mostly by designers? Not very appropriate.

In the Adobe case, designers might get miffed that there isn't a category especially for them, but they can at least choose “other”. More about this in the following section.

Complete

With regards to response categories, ‘appropriateness’ is about ensuring there are explicit categories for the most frequent responses, while ‘completeness’ is about ensuring that all responses are catered for. Completeness is important because not providing a suitable option for every respondent is likely to lead to frustration, unreliable results and higher non-response.

A sure-fire way to produce a set of response categories that is complete is to include an “other” option, which will ‘catch’ all the form-fillers who don't fit elsewhere. Not including such a category is a dangerous practice: problems will arise unless the designer has managed to think of every possible case.

Options for “other”

A question with an “other” option may or may not also give the respondent the opportunity to provide more detail on their specific case. In the Adobe example, the form-filler does not have the chance to say that they are a designer; they are lumped in with all the other respondents who chose “other”. But in the example in Figure 2, also from the Adobe account sign-up form, the field immediately following the closed question about state/province allows the form-filler to give further location information.

Screenshot of Adobe form showing drop down question for state/province and other specify write-in box.
Figure 2: These fields give the user a chance to specify their location when it has not been included in the existing response categories.

Obviously, if Adobe is to send anything in the mail, it must have this information. However, the need for a “specify” field is not always so clear cut.

The advantages of having such a field are that it:

  • empowers the form-filler to give accurate answers;
  • helps the form-filler feel like they have been considered in the design process (very important in a customer service or sales situation);
  • allows people processing the form data to recategorise the response if the choice of “other” was incorrect; and
  • enables the creation of new categories for the purpose of data analysis (i.e. if there were popular responses not catered for in the predetermined set of options).

The disadvantages of having a “specify” field are that it:

  • adds to the respondent's workload; and
  • may lead to an increase in production, distribution and processing costs (because of additional length).

Regardless of whether or not a “specify” field is provided, maximising appropriateness of the response categories should mean there are proportionally few people who answer “other”.
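In electronic forms, a common way to offer a “specify” field without adding to everyone's workload is to enable it only when “other” is selected. Below is a minimal sketch in TypeScript; the field name and id are hypothetical, not taken from the Adobe form:

```typescript
// Illustrative sketch: enable a “please specify” text box only when the
// “Other” radio button is selected. Assumes a radio group named
// "job-title" and a text input with id "job-title-other" (hypothetical).
function wireOtherSpecify(groupName: string, specifyId: string): void {
  const specify = document.getElementById(specifyId) as HTMLInputElement;
  const radios = document.querySelectorAll<HTMLInputElement>(
    `input[name="${groupName}"]`
  );
  radios.forEach(radio =>
    radio.addEventListener("change", () => {
      // Only ask for detail when “Other” is the chosen category;
      // otherwise clear and disable the field to reduce workload.
      const isOther = radio.checked && radio.value === "Other";
      specify.disabled = !isOther;
      if (!isOther) specify.value = "";
    })
  );
}

wireOtherSpecify("job-title", "job-title-other");
```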

When “other” doesn't fit

It doesn't always make sense to have an “other” category. For example, when the response options are points on a continuous scale—e.g. age, length of time in current position or income—having an “other” response doesn't really work.

In one respect, such questions shouldn't need an “other” response. Open-ended end points (e.g. “65 years or over”) can allow the whole spectrum of the scale to be captured.

Yet such questions frequently still suffer from completeness problems, usually because it is difficult to make discrete categories out of something continuous. In the example below, what answer should be given if the respondent has been a member for two and a half years?

Screenshot of question about membership length where the response categories are one year spans.
Figure 3: The response categories shown here are not complete, resulting in some legitimate answers ‘falling’ between the cracks.

Perhaps the designer assumed that people would round their answers, which may be true. The problem is, we can't be sure that all of the form's respondents will round in the same way, particularly when it comes to the mid-point. A person who has been a member for two and a half years might decide to round down (because they haven't been very active) or round up (because they want to look good in the survey results).

There is, however, a solution to even this problem. In the membership question above, the design would be improved by:

  • making the categories complete (e.g. “less than 1 year”, “1 year or more but less than 2 years” and so on; see the sketch below); or
  • giving an instruction about what respondents should do when they ‘fall in between’ categories.
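To illustrate the first approach, here is a minimal sketch in TypeScript: categories defined as half-open ranges, so that every value (including two and a half years) falls into exactly one bucket. The labels and boundaries are illustrative assumptions:

```typescript
// Illustrative sketch: half-open ranges [min, max) make the category set
// complete (no gaps) and mutually exclusive (no overlaps) at the same time.
interface RangeCategory {
  label: string;
  min: number;  // inclusive lower bound
  max: number;  // exclusive upper bound; Infinity gives an open end point
}

const membership: RangeCategory[] = [
  { label: "Less than 1 year",                      min: 0, max: 1 },
  { label: "1 year or more but less than 2 years",  min: 1, max: 2 },
  { label: "2 years or more but less than 3 years", min: 2, max: 3 },
  { label: "3 years or more",                       min: 3, max: Infinity },
];

function categorise(years: number, categories: RangeCategory[]): string {
  const match = categories.find(c => years >= c.min && years < c.max);
  if (!match) throw new Error(`No category for ${years}: the set is incomplete`);
  return match.label;
}

console.log(categorise(2.5, membership)); // "2 years or more but less than 3 years"
```

Because adjacent ranges share a boundary without overlapping, the same structure also satisfies the mutual exclusivity requirement discussed below.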

Self-explanatory

Category labels, by their very nature, summarise the contents of the category. For example, we use the category label “nuts” to refer to a whole raft of things from almonds to walnuts. As anyone who works in taxonomy, information architecture, law and so on knows, the simplified nature of category labels makes them prone to misinterpretation and long debates about their definition. Just think of the different things that the following (simple!) terms may or may not include, and this problem with categorising becomes clear:

  • “Child”
  • “Employed”
  • “Busy”
  • “Car”

As form designers we always strive to avoid ambiguity, and category labels are no exception. However, what is self-explanatory to the form owner may not be self-explanatory to the form-filler. A good designer will therefore ‘step into other people's shoes’ to assess just how self-explanatory the response categories are.

Figure 4 illustrates a designer definitely not stepping into the users' shoes. In this example, the response categories revolve around the terms “metro” and “regional”. These terms are commonplace for people working in the Australian market research industry, and many such people know which category is right for different cities across the country.

The same cannot be said for your average member of the general public: assuming they work out that “metro” is short for “metropolitan”, who knows how they would categorise places like Bunbury (WA), Toowoomba (Qld), Bendigo (Vic) or even Canberra, the nation's capital.

Screenshot of question about where the respondent lives.
Figure 4: For the target audience, these response categories are not self-explanatory.

Stepping into the users' shoes would have helped in the following example as well.

Screenshot of question about a web designer or developer's previous field.
Figure 5: In this survey, the term “technical” has a specific meaning, one that is not necessarily shared by all of the form's users.

It looks like, for the owners of this survey, the term “technical” equates to the computing or creative space (web design and development is the merging of these two spaces). An engineer, ergonomist or psychologist may well consider their area of expertise to be “technical”, yet none of the provided categories seem to work for such respondents.

Mutually exclusive

The opposite of having gaps between response categories is having overlap between response categories. In the following example, the response categories are clearly not mutually exclusive, yet the respondent is allowed to choose only one option. This leaves the form-filler having to guess things (like the intention of the designer and how the data is going to be used) so that they can attempt to give an appropriate answer.

Screenshot of question asking the main reason you contacted eBay customer support, with overlapping categories like ‘selling’ and ‘PayPal’.
Figure 6: There is overlap in these response categories, yet the form filler is allowed to choose only one.

Unbiased

Our last consideration applies only to questions for which the response categories are different points along a particular dimension (e.g. positive vs negative, happy vs unhappy, expensive vs inexpensive).

Consider Figure 7 below, for rating a web page. This set of response categories has only one negative option compared with three positive options, creating a bias. Even if you consider “satisfactory” to be a neutral, rather than positive, option, there are still twice as many positive options as negative ones.

Screenshot of question asking information to be rated, with response options 'excellent', 'good', 'satisfactory' and 'poor'.
Figure 7: This scale has a bias towards positive responses.

This is certainly not the worst case of bias that we've seen. But it's hardly worth gathering feedback if the design is skewed against criticism.

Users are influenced, in many subtle and complex ways, by the response categories they are given for such rating questions. Bias is just one problem: look out for a future article tackling this and the other challenges we face when using one-dimensional scales as response categories (e.g. ensuring balance and use of mid-points).
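In the meantime, a crude safeguard is simply to count the options on each side of the scale's midpoint. Here is a minimal sketch in TypeScript; the valence labels are our own reading of the scale, not anything from the original survey:

```typescript
// Illustrative sketch: a balanced one-dimensional scale has as many
// negative options as positive ones around an (assumed) neutral midpoint.
type Valence = "positive" | "neutral" | "negative";

function isBalanced(scale: Valence[]): boolean {
  const positives = scale.filter(v => v === "positive").length;
  const negatives = scale.filter(v => v === "negative").length;
  return positives === negatives;
}

// The Figure 7 scale, read generously (“satisfactory” as neutral):
console.log(isBalanced(["positive", "positive", "neutral", "negative"])); // false

// A balanced alternative with two options on each side of the midpoint:
console.log(isBalanced(["positive", "positive", "neutral", "negative", "negative"])); // true
```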

Checklist for response categories

So to summarise, the next time you are developing a set of response categories for a closed question, you can ask yourself:

  • Is this set of options appropriate given what I know about the context?
  • Are there any gaps or overlaps between my categories?
  • Are my category labels self-explanatory to the people filling in the form?
  • Do I need an “other” option and, if so, should it be accompanied by a “specify” field?
  • Am I presenting form-fillers with an unbiased set of options to choose from?

References

Bradburn, N.M., Wansink, B. & Sudman, S. (2004). Asking Questions. John Wiley & Sons, San Francisco, pp. 153 & 156.