Let’s talk about evidence. I’ve been working on several funding proposals lately, as is the plight of the final year PhD student. Some of these proposals are based in Canada, some in the US, and some in the UK. I am getting well-practiced at putting quite a bit of effort and very little stock in any of these applications panning out. I accept the reality of the funding environment, and acknowledge how little most people care about eating disorders—so I need to work quite hard to convince people that eating disorders do, in fact, matter.
Convincing people that eating disorders matter is a tricky game. It’s kind of like attempting to make something that is vegan, gluten free, and made with “natural sugars” grab your tastebuds. Difficult, but not impossible. And no, the irony of that simile does not escape me.
One of the things I have noticed about funding applications in the States in particular is that if you want people to care about eating disorders, something they care very little about, you’d sure as hell better use the methods they’re used to. There had better be some numbers behind your observations, or you might as well not bother. Obviously, this perturbs me.
Call me easily perturbed if you will, but I’m a firm believer in methodological innovation. Unfortunately, it doesn’t seem like people want to take the double risk: risking engagement with eating disorders, which are still framed as disorders of young, white, vain, able-bodied women; risking trying something new with methods… or something very old, applied to the research environment. Like, for instance, talking to people. It shouldn’t be a radical idea. Asking people about their experiences is about as risky and “out there” as sticking a bunch of plums in a dish and pouring maple syrup over them and expecting it to taste good. Oh wait…
I have a very different definition of “evidence based” than many in the research community, eating disorders or otherwise, and place a very different premium on the importance of numbers to back up what we are saying. This is becoming clear the more I compete for the limited funds disbursed to study social phenomena, and the more I roll in eating disorder advocacy circles; both places tend to strongly value “hard science,” and see qualitative approaches as a handy little afterthought.
There is a time and place for numbers. I have nothing but respect for quantitative researchers, and I do believe that it is possible to do good quantitative research. Numbers are compelling, and stats can even (GASP) be fun and innovative in and of themselves.
But numbers do not tell the whole story, nor are they inherently better for describing social phenomena, eating disorders included. They are simply different.
I have seen terrible quantitative studies on eating disorders that replicate the same errors that have pervaded the field for years. I’ve seen reliance on scales that we discovered long ago have fundamental flaws in their psychometrics (i.e., they don’t measure what they are supposed to be measuring in the same way for different groups of people). Many of the scales used to determine levels of pathology and outcomes for eating disorders are built on the very assumptions that many people using those studies for advocacy argue against, for instance the idea that eating disorders are primarily tied to poor body image. Most of the scales were developed with white, Western, often clinical populations. A significant chunk of studies on eating disorders are conducted with people with anorexia, in clinical settings.
As I write this, I fear that my words will be taken the wrong way. I have friends and colleagues who conduct quantitative research. I have read compelling accounts of eating disorders that use quantitative methods; I have cited statistics in funding applications and research studies. It is also worth noting that I’ve read terrible qualitative studies on eating disorders. I’ve seen studies that claim to be exploring a definition of recovery while sampling only people who fit a pre-determined set of criteria for recovery. I’ve seen thematic analyses where the authors do little analysis and more counting of mentions. I’ve seen authors publish five articles that say essentially the same thing. I’ve seen claims of novelty on research that was done twenty years ago.
As I write this, I fear that I invite a gaze on my own work wherein it too will be criticized for not being innovative enough. But that’s just the thing: who defines innovation? So often, research funding is little more than a game of politics; whose work is en vogue? Whose methods align with the dominant spirit of the times? Whose name do the reviewers recognize and trust? What is the political funding climate under the party in power?
It’s important to acknowledge the fundamentally political character of the research enterprise because it helps us to recognize that science is never neutral. Every day, some voices are amplified above others due to funding and publication biases. Sometimes the most compelling “evidence” comes from someplace else entirely, but it doesn’t have the sheen of scholarly communication. Non-scholarly environments are, of course, no less prone to politics and biases.
But we simply cannot say that stories are not evidence; of course they are. (So maybe let’s listen).