Is nuance a good thing for surveys?
Survey design can feel like a balancing act. On one hand, you want to be specific enough to get useful, actionable data. On the other, too much nuance can overcomplicate your questions, confuse your audience, or make your data harder to analyze.
It’s a tricky line to walk, but here’s the key: Nuance and specificity are not the same thing. Specificity is about clarity and precision—getting exactly what you need. Nuance involves subtle differences, which can sometimes add value… or sometimes just cause headaches.
Just a few weeks ago, I came across what I’d consider a fairly common question, and it demonstrates exactly this dilemma!
After a virtual call, I was asked to rate the quality of the call with these answer choices: awful | bad | okay | good | great. (Yes, there were also emojis involved. That’s a topic for another day.)
Now, in my opinion, this is a lot of nuance. Why do I think that? Let’s break it down:
“Awful” and “bad” are synonyms. I assume “awful” means “super bad” because it sits furthest to the left in the answer choices, which typically run from worst to best. But is that distinction actually meaningful?
On the other end of the scale, we have a more subtle, but still sticky, issue with “good” and “great.” In everyday language, I get the difference—“good” means fine, and “great” means something more like “exceeds expectations”—but how would different people interpret that here?
“Okay” versus “good” feels equally vague. Is “okay” one step up from “bad,” or is it neutral? (Ugh, neutral.)
When the distinctions between your options are too subtle, you risk confusing your audience. People might hesitate, guess, or choose randomly—making it harder to analyze the results.
In my humble opinion, you don’t need that much nuance. What if it were simplified to:
• The quality was bad.
• The quality was okay.
• The quality was great.
That would be enough to give you useful information, if only the question itself weren’t so vague. This company doesn’t just need less nuance; they need more specificity.
If someone says the call quality was bad, what are you going to do differently? This is a one-question survey, so you don’t have enough details to act. Was it the audio? The video? Something else entirely? Without follow-up, you’re left guessing.
Here’s a suggestion: Instead of asking for an overall rating, try a checklist:
Tell us about the call you were just on (check all that apply):
⬜️ Everything was great.
⬜️ I had problems with the audio (e.g., poor sound quality, delays).
⬜️ I had problems with the video (e.g., freezing, low resolution).
⬜️ Other, please describe: ______________.
This approach is more actionable, more specific, and—bonus—easier for people to answer.