I’m a researcher, and I wouldn’t take “research finds” at face value.

(Prefer a video version? Part 1; Part 2)

We’re living in an age when misinformation proliferates. While we are getting better at policing “fake news,” what about the more subtle sources of bad information that are touted as fact? 

People cherry-pick stats, misrepresent findings, or simply don’t do the requisite digging into a source. As a public, we’re presented with an attention-grabbing headline or a number completely lacking in context. We don’t get the full picture in an accessible way.

That’s a major problem.

There’s pressure on us to be critical consumers of information. That is, to not take numbers or “truths” at face value. We’re told to interrogate the source and to contextualize results. 

We’re being asked to do someone else’s homework. 

In this case, we should. Headlines proclaiming “research finds” and “study shows” give people carte blanche to share whatever follows as fact, when those phrases should instead give us pause.

Let’s look at an example that’s ripped from the headlines. 

Last month, the following article appeared in MarketWatch: “Being a CEO is a deadly affair, new research finds.” 

Alarming — and attention-grabbing. 

If you keep reading, you’ll learn that researchers found “CEOs can die up to 1.5 years earlier — when their companies are at risk of being taken over.” 

Only two sentences in, and the headline already starts to look misleading.

Continue, and you’ll learn that researchers came to this conclusion after “looking at the births and deaths of more than 1,600 chief executives.” 

That seems like a reasonably large number at first, but you won’t find any meaningful information about these CEOs in the article. Important qualifiers like: 

  • What industry? 

  • How do they identify in terms of gender? Race and ethnicity? 

  • What were the causes of death?

If you were to go digging for the scholarly publication, you’d learn that this finding comes from “large U.S. firms included in the 1970-1991 Forbes Executive Compensation Surveys.” 

Did you catch that? The study focuses on firms included in a survey that dates back to 1970 — that’s 51 years ago. Reasonable, given that the context of the study is anti-takeover laws, most of which were enacted in the 1980s.

But do you still think “being a CEO is a deadly affair”?

Let’s take a more complicated example: a study produced last year by a reputable nonprofit consulting group, The Bridgespan Group. Check out The New York Times headline: “In Philanthropy, Race Is Still a Factor in Who Gets What, Study Shows.” 

It’s attention-grabbing, and it certainly tracks with what I’ve observed in my work with funders, but does it paint the whole picture?

If you keep reading or dig into the study, you will see:

  1. The study has a limited sample. The analysis is of one funder’s applicant pool. It isn’t accurate to conclude that one organization, even one of the most competitive organizations, represents the entire philanthropic space.

  2. Correlation doesn’t imply causation. Finding racial disparities in funding doesn’t tell us why the disparity exists — that is, it doesn’t prove that the disparity in “who gets what” is due to race. Neither do the comments from “interviews with more than 50 sector leaders.” Could a study prove this? Yes. Does this study? Probably not; it’s hard to say definitively without a clear section on research methodology.

A more apt title might be: “Funder finds racial disparities in revenues and assets of its highest-qualified applicants” or “Here’s what sector leaders have to say about racial disparities in philanthropic funding.”

These are but two of many examples of irresponsible reporting of information. I’m not arguing that the findings reported by MarketWatch or The New York Times don’t have merit. They do. I am saying there simply isn’t enough information or context presented to validate the conclusions being reported.

People regularly overstate findings, extracting themes and conclusions from results that are at best limited or misinterpreted and at worst drawn from poorly designed studies.

Overstating findings or proselytizing with decontextualized information is harmful — it slowly and steadily waters down truth and trust in science, risking that science becomes just another biased tool anyone can pick up and use to bolster an argument in their favor. Perhaps it’s not fun or sexy to dig into and clearly lay out the who, what, when, where, and why of “facts,” but it is important. 

If you take a peek at peer-reviewed publications, you’ll find that many academic researchers are careful to do just that: they tell you where their data came from and report the context of their findings. Scholarly articles are rife with hemming and hawing, caveats, and limitations. 

Yes, that can be frustrating. I understand the desire for a clear answer, but the truth is research rarely has the answer. Rather, it provides information that, when properly contextualized, can be useful to consider.

I dream of a world where we have a nutrition label for facts. Whenever we read a stat or finding, we should have easy access to a quick run-down of its history in plain language. It’ll say: Here's who we talked to. Here’s where they’re from and some important details about their background. These are the questions we asked. This was when we collected the information.

It’s tempting to overgeneralize or overstate research findings: it’s more sensational, and it hooks the reader’s interest. It’s also irresponsible. Anyone who reports “facts” or references “research” has a responsibility to be transparent, ethical, and comprehensive in presenting the information.

The headlines might not be as snappy. But what’s more important: sounding good or telling the truth?
