07 January 2011

Statistics, antidepressants, and suicide

Statistics is hard. At this point the, shall we say, flexibility of statistics is well worn (even high schoolers have heard that there are "lies, damn lies, and statistics"). Why statistics is hard is an important but rarely asked question. One reason is that statistics sits at the intersection of quantification and inquiry. Anyone can count how many M&Ms are in the package, but first they have to ask how many there are. In any statistical venture it is very important to keep one foot firmly rooted in that initial inquiry, no matter where the mathematical convolutions take you. Which leads to the question: is an increased suicide rate a side effect of antidepressants? The FDA thinks so. But they're wrong.

To begin, let us examine two hypothetical studies investigating whether antidepressants increase suicidality. In study #1 we tested drug A and found that those taking the drug committed suicide at twice the rate of those on the placebo. In study #2 we found that subjects taking drug B committed suicide at half the rate of those on the placebo. Is this clear evidence that drug A is more dangerous than drug B? If you're the FDA, it is.

To illustrate the problem, let's say that each arm of each study had 100 patients. In study #1, two patients taking drug A committed suicide and only one taking the placebo did. In study #2, three patients taking drug B committed suicide, but six patients in the placebo group did. The placebo group, who do not receive the drug, are the ones determining the apparent safety of these drugs (if you're scoring at home, A is Paxil and B is Prozac).

In absolute terms, drug A's arm suffered fewer suicides than drug B's (2% vs. 3%), making it the safer drug. Except this is the opposite of the result we got when we examined each study independently of the other. This is why it is so important to focus on what question your statistics are actually answering.
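The relative-vs-absolute distinction can be sketched in a few lines. These numbers come from the two made-up studies above, not from any real trial data:

```python
# Hypothetical arms from the two made-up studies above: 100 patients per arm.
ARM_SIZE = 100
studies = {
    "A": {"drug": 2, "placebo": 1},  # study #1: drug A vs. its placebo
    "B": {"drug": 3, "placebo": 6},  # study #2: drug B vs. its placebo
}

for name, suicides in studies.items():
    relative = suicides["drug"] / suicides["placebo"]  # rate relative to placebo
    absolute = suicides["drug"] / ARM_SIZE             # absolute rate on the drug
    print(f"Drug {name}: {relative:.1f}x its placebo, {absolute:.0%} absolute")
```

Drug A looks twice as dangerous as its placebo yet has the lower absolute rate (2% vs. 3%); drug B looks protective yet has the higher one. Each study answers only "how did the drug compare to *this* placebo group," not "which drug is safer."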

To complicate matters further, actual suicides are incredibly rare in real antidepressant trials. This is partly because most trials last only 4 to 6 weeks, but mostly because anyone showing evidence of suicidal tendencies is excluded from the studies at the outset. It is not really practical to give people who are actively ideating suicide an experimental drug, much less a placebo, and it is even harder to get such a study past a medical ethics review board. So those folks are excluded from antidepressant studies.

Owing to this, clinicians use suicidality as their metric instead. Suicidality generally includes suicidal ideation and planning as well as self-harming behaviors. These events are more common than actual suicide and thus much easier to catalog and compare. A large meta-analysis published in the BMJ found that 0.54% of patients in clinical trials expressed "suicidality." Helpfully, it even breaks down these events by indication group. If you do the math (they did not), 64% of the suicidality events were accounted for by patients with major depressive disorder (MDD), and a further 28% by patients with a condition categorized as "other psychiatric" (which explicitly excluded non-MDD depression). Completely unsurprisingly, research has found that MDD and "other psychiatric" conditions correlate very strongly with suicide (and, to use the researchers' own proxy, suicidality).
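Summing the two shares quoted above (a back-of-the-envelope step, using only the percentages from the text) shows how concentrated these events are:

```python
# Shares of suicidality events by indication group, as quoted in the text.
mdd_share = 0.64           # major depressive disorder
other_psych_share = 0.28   # "other psychiatric" (excludes non-MDD depression)

high_risk_share = mdd_share + other_psych_share
print(f"{high_risk_share:.0%} of suicidality events fell in the two "
      "diagnostic groups most strongly correlated with suicide")
```

In other words, roughly nine in ten suicidality events came from patients who were already at elevated risk before any pill was dispensed.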

What this means is that 99.5% of these patients (who are high risk) never even expressed suicidality. Where is that number supposed to go from 99.5%? The occurrence of suicide itself in the general population is 11.4 per 100,000 (about 0.01%), and we have already covered that suicide is incredibly rare compared to suicidality. Moving on.
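The complement arithmetic is trivial but worth making explicit. These figures are taken straight from the text (0.54% suicidality in trials, 11.4 suicides per 100,000 in the general population); the comparison is deliberately apples-to-oranges, since suicidality is a far broader event than completed suicide:

```python
suicidality_rate = 0.0054          # share of trial patients expressing suicidality
pop_suicide_rate = 11.4 / 100_000  # general-population suicide rate (~0.01%)

no_suicidality = 1 - suicidality_rate
print(f"{no_suicidality:.2%} of high-risk trial patients expressed no suicidality")

# Even the broad proxy (suicidality) in a high-risk population is only
# ~50x the rate of the much narrower event (suicide) in the general one.
print(f"~{suicidality_rate / pop_suicide_rate:.0f}x the general-population suicide rate")
```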

What do we do when we can't randomly assign people in clinical studies? Observational studies! (Ignore the fact that you can't, technically speaking, draw causal links from observational data.) Into that breach ride analyses of antidepressant sales vs. suicide rates across time and location. What do they find? Here is an illustrative example from one such paper (there are many more with similar findings):
The decline in the national suicide rate (1985-1999) appears to be associated with greater use of non-tricyclic antidepressants. Treatment of a greater proportion of mood disorders with SSRIs and other second-generation non-tricyclic antidepressants may further reduce the suicide rate.
Indeed, far from increasing suicides, they (oddly enough for an antidepressant) seem to reduce them.

Oh, but that's not a study of adolescents, you say? Stop it.
From an initial database of 656 studies, we identified and examined six studies. In the latter, nine of 574 young people (1.6%) who died by suicide had had recent exposure to SSRIs.
The question you ought to be asking right about now is, "What are reasonable expectations for these drugs?" We do not expect penicillin to cure every infection. We don't even expect surgical intervention to succeed 100% of the time. It is absolutely ridiculous to expect antidepressants to prevent suicides when they are predominantly given to those most likely to commit suicide.

As for why this treatment of suicide as a side effect is so pernicious, consult a psychiatrist.
In essence, these black box warnings reduce the complexity of suicidality into less of a behavior or activity and more of a reaction, a reflex. You might think that's not the intention, and I'm sure you'd be right, but that is the result. Note that no one is attributing any other complex behaviors to these meds as a side effect -- not increased marriage, or desire to learn French, etc.

Suicide thus becomes MORE of something that can happen to you and LESS of something you do because you feel a certain way. It's subtle, but it matters. It's a corruption of language to escape the -- well, nausea -- of massive freedom; it says behaviors occur, not get chosen.
