Do you worry that AI will take your job? I don’t.

Do you ever worry that AI will take your job? I don’t. Here’s why.

Yesterday, for fun, I asked ChatGPT to write a 5-question survey for me. More specifically, I asked: "I hosted an event last week and would like feedback from attendees. Can you write me a 5 question survey I can send out?"

It was a quick and basic prompt, lacking my usual preference for instruction and specificity. That was purposeful. One thing I’ve noticed in my years of working with people on surveys is that few — outside the realm of research — sit down and really take their time to craft careful questions. It seems far more realistic (to me) that someone would throw ChatGPT a breezy prompt like this one.

I envision the scenario as one of the following: 1) “Oh, I’ve been asked to create a survey for this thing and I have no idea what to ask (or don’t have time to do it), so I’m going to have ChatGPT do it,” or 2) “I need a survey and I don’t know where to start (or I don’t have time, etc.), so I’m going to have ChatGPT do it.”

That’s what people are doing these days, right? I’ll be the first to admit that ChatGPT writes a first draft of any uncomfortable emails I need to send.

So, how did ChatGPT do with writing this 5-question survey?

Unfortunately (or perhaps fortunately, given my chosen profession), not well. That said, it did hit the big buckets of feedback pretty well, especially given the lack of instruction. These were the categories it proposed:

  1. Overall Experience

  2. Content and Relevance

  3. Organization and Logistics

  4. Engagement and Interaction

  5. Suggestions for Improvement

Not bad. But, what did the questions look like? High-level:

  • 3 out of 5 were double-barreled (they asked about two different things at once, which is a big survey no-no)

  • 2 out of 5 included a neutral response category that didn’t make sense given the question (maybe it’s just me, but there are some things you can’t be neutral on: either the event was relevant, or it wasn’t)

  • 1 (possibly 2) lacked so much specificity that anyone who used it would miss the opportunity to collect meaningful data they could actually use.

So, to put it bluntly: I was not impressed. ChatGPT does not know the basics of good survey design. Let’s take a deeper look at ChatGPT’s questions…


Overall Experience:

How would you rate your overall experience at the event?

  • Excellent

  • Good

  • Fair

  • Poor

KLH: This isn’t necessarily a bad question. I have a personal preference for asking about individual aspects of an event; however, I didn’t give ChatGPT much to go on here. As for the answer categories: candidly, I would just never use these. I would be more likely to ask this with an agree/disagree Likert scale alongside other questions about the event. Or, I might ask how likely attendees are to recommend the event to a friend or colleague (à la the Net Promoter Score, or NPS). Lots of possibilities :)
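If you’re curious what the NPS alternative actually involves: you ask a single “How likely are you to recommend…?” question on a 0–10 scale, then subtract the percentage of detractors (0–6) from the percentage of promoters (9–10). A minimal sketch, with made-up attendee ratings purely for illustration:

```python
# Hypothetical example: computing a Net Promoter Score (NPS) from
# "How likely are you to recommend this event?" responses on a 0-10 scale.
# NPS = % promoters (ratings of 9-10) minus % detractors (ratings of 0-6);
# passives (7-8) are counted in the total but otherwise ignored.

def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Made-up ratings: 5 promoters, 3 passives, 2 detractors
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(net_promoter_score(ratings))  # (5 - 2) / 10 * 100 -> 30
```

The score ranges from -100 (everyone is a detractor) to +100 (everyone is a promoter), which makes it easy to track across events.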

Content and Relevance:

How relevant and informative did you find the content presented at the event?

  • Very Relevant and Informative

  • Somewhat Relevant and Informative

  • Neutral

  • Not Very Relevant and Informative

  • Not Relevant and Informative at All

KLH: Here, most importantly: content could be relevant but not informative, or informative but not relevant, to me. This is the definition of a double-barreled question. ChatGPT could have salvaged it by further complicating the already complicated answer choices with options like “very relevant but not informative,” but it didn’t — and yikes, I wouldn’t recommend that in this instance anyway! This is also, in my opinion, a great example of something you cannot be neutral on. It’s relevant or it’s not. Convince me otherwise. If I were asking a question like this, I would rate relevance and usefulness separately, perhaps on a scale of Not at all, A little, Somewhat, Very. You could also have attendees choose or rank their favorite sessions. Again, lots of options.

Organization and Logistics:

How satisfied were you with the organization and logistics of the event (e.g., registration process, venue, scheduling)?

  • Very Satisfied

  • Satisfied

  • Neutral

  • Dissatisfied

  • Very Dissatisfied

KLH: Now we arrive at the question that is needlessly lacking in specificity. Is it double-barreled? Yes. But even worse, it asks people to rate things like registration, venue, and scheduling all at once WITHIN the double-barreled question. Registration could be horrible but the venue amazing; you would want to ask about each of these logistical aspects of an event separately in order to get actionable information. Again, in my opinion, the neutral option is useless here. And, if I’m being picky: I hate asking about satisfaction (that’s a whole different newsletter); I would prefer to rank, rate, or make this into an agree/disagree question with individual statements.

Engagement and Interaction:

How would you rate the level of engagement and interaction during the event (e.g., opportunities to ask questions, participate in discussions)?

  • Excellent

  • Good

  • Fair

  • Poor

KLH: I won’t waste your precious time: same issues as the question immediately above, and again with one of my least favorite lists of answer choices. Could you ask about overall engagement or interaction? Sure. But if there were specific opportunities, such as participating in discussions or a Q&A, you’d be better off asking about them separately so you can get actionable information.

Suggestions for Improvement:

What suggestions do you have for improving future events? (Open-ended)

KLH: This is okay — but note that qualitative responses are more difficult and time-consuming to analyze, especially as the number of responses grows. This could easily be converted into a more actionable closed-ended question that asks people what changes they’d like to see and provides a list of options for them to choose from.


All in all, I’m not worried about ChatGPT replacing me. I AM worried that, with the growing popularity of using AI to do work you don’t want to do or don’t know how to do, more people will turn to tools like ChatGPT to design their surveys. Which means we’re going to see a lot more bad survey questions, and a lot more data that isn’t actionable. Bummer. Too bad ChatGPT can’t sign up for my course :)

The takeaway for me is this: if you want to use ChatGPT for survey design, it seems better suited to suggesting topics to cover in your survey. Perhaps if you gave it more instructions, you might get better questions. If you try it out, let me know!
