Should you require survey responses?

When I was first learning how to design surveys at the University of Pennsylvania, it was standard practice to code our surveys so that we “forced” responses. This meant that when we programmed our surveys into our platform of choice (spoiler alert: it was Qualtrics), we’d check a little box that allowed us to require a response: if a question was left unanswered, the person wouldn’t be able to move forward in the survey. We were conducting studies, and letting people skip questions could lead to missing data.

Years later, I worked with an organization whose standard practice was the exact opposite: they never forced responses. Their reasoning? Forcing people to answer created an unfair power dynamic. It could pressure people into sharing information they didn’t want to share.

I hadn’t thought of it like that before, but I understood where they were coming from. From an equity perspective, I can see the argument that it’s not fair to force someone to answer a question. In studies with consent forms, participants are allowed to opt out at any time. Shouldn’t surveys be the same? There are best practices aligned with this idea, like including an option for “I prefer not to answer” on sensitive questions. But that's not as extreme as allowing someone to opt out of any and every question. It’s an interesting debate.

From my research brain’s perspective, this organization's policy worried me. If someone skips a question, I don’t know why. Did they feel their answer didn’t fit the options provided? Did they just not want to answer? Did they miss it by accident?

We reached a compromise and turned on a feature called “Request Response.” If someone skipped a question, they’d get a little pop-up saying something like, “Hey, you missed this question. Do you want to go back and answer it?” But if they didn’t want to answer, they could just keep clicking through and move on.

I thought about all of this recently when I was returning an item on Amazon. I had bought the wrong size and was exchanging it for the right one. If you’ve ever returned something on Amazon, you know that, like many other online stores, they require you to take a survey (and, hey, I get it, that's some helpful data!). The first question was simple: “Why are you returning this item?” I selected “wrong size” and moved on. Then came an open-ended question: “Is there anything else we should know about this item?”

I didn’t have anything else to say, so I skipped it. Or at least, I tried to skip it. The survey wouldn’t let me proceed until I answered. It was forced response! Annoyed, I had to type “No” into the box just to continue with my return.

This is undeniably bad survey design. That open-ended question was essentially a yes/no question. They could have easily had a radio button (those are the questions you see with a circle you can click next to them) asking, “Do you have anything else to share about this item?” If I selected “yes,” I could then provide details. If “no,” I’d be done. Instead, they forced me to type into a text box unnecessarily. This goes against another standard survey practice: never requiring an open-ended response. It's just asking too much of the person taking the survey—even researchers draw the line there. We won’t make you type an essay!
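If it helps to see the shape of that logic, here’s a rough sketch in plain Python (not Amazon’s actual flow, and not real Qualtrics configuration) of the branching I’m describing: a quick yes/no gate that’s fine to require, with the open-ended box shown only to people who opt in.

```python
# A minimal sketch of the branching described above, assuming a simple
# command-line stand-in for a survey form. The question wording and the
# return_survey() helper are illustrative, not any platform's real API.

def return_survey():
    # Required yes/no "radio button" question: cheap to answer, reasonable to force.
    has_feedback = ""
    while has_feedback not in ("y", "n"):
        has_feedback = input(
            "Do you have anything else to share about this item? (y/n) "
        ).strip().lower()

    # Optional open-ended follow-up: only shown if they said yes, and never forced.
    if has_feedback == "y":
        details = input("Tell us more (or press Enter to skip): ").strip()
        return details or None
    return None


if __name__ == "__main__":
    print(return_survey())
```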

In this case, the forced response wasn’t just unnecessary; it actively made the experience of the person taking the survey (me) worse.

Good surveys demand a good survey-taking experience.

They show that you’re listening and that you care about the people taking the time to share their feedback.

This Amazon survey? It missed the mark. Forced responses *may* have their place in research (I'm still rethinking my position), but they must be used strategically. Otherwise, they risk putting off the very people you’re trying to learn from.

What do you think: force, request, free for all?
