Does lamotrigine work, or not?
(initiated 1/2009, completed 3/2009)
Summary: First an analysis appeared saying lamotrigine really was no better than a placebo. Then a more recent paper reached the opposite conclusion (and one author was on both papers!). The more recent analysis shows that people with severe depression respond better to lamotrigine than to a placebo, whereas for less severe depression, the results are not as clear.
First: It doesn't work better than placebo
Usually when two different research studies reach opposite conclusions, it is important to look at both of them. However, in this case, the same data were used in both studies. The second, more recent paper just used a more sophisticated way of looking at the results (these are "meta-analyses", which take previous research and re-examine the results, combining multiple studies into a single grand tally).
Therefore, in this case, the first paper is somewhat moot. The second paper effectively replaces it. Here's the reference if you want it: Calabrese. If you're satisfied with that, you may skip to my summary of the second paper below.
However, no less an authority than Nassir Ghaemi, one of psychiatry's best logical thinkers, wrote a lengthy essay about evidence in psychiatry, and used lamotrigine research as an example. I would not dismiss his view lightly. He says that if research data do not support the efficacy of a treatment, then we should not use it. I disagree, despite having discussed my views with Dr. Ghaemi several times (in other words, he has not won me over entirely yet).
Here's my view, to contrast with Dr. Ghaemi's. The challenge for a clinician is to balance research results with clinical experience. His or her experience is not useless. When research data do not jibe with clinical experience, then we have to re-examine our practices. However, as Dr. Ghaemi emphasizes, the trick is not to be led by limited research data (especially when we see only the positive, published studies; not the unpublished, negative studies, which have been systematically hidden from us by the pharmaceutical companies; see his article for details there).
Dr. Ghaemi would very likely agree about a further risk: limited data might actually bias our view of our own clinical experience, because often one sees what one expects to see. If research results lead us to expect that a treatment approach really works, this will help us produce benefits for our patients, even if the treatment is no better than a placebo, because better placebo effects are generated when a clinician wants to help, and believes she is likely to help (this nature of placebo effects was understood as far back as the 1700s, by the way [Phelps]). To the extent that this occurs (to my knowledge it's not been studied), this is a serious problem. And yet ultimately I think -- if one is really paying close attention to what patients say about their experience -- treatments that really work better than a placebo will demonstrate their advantages, and those which do not will prove repeatedly disappointing, performing below expectations and thereby changing those expectations, increasing skepticism. Of course, one must also remember leeches and bloodletting. Physicians believed in their efficacy for a very long time.
However, the more recent of these two meta-analyses, to which we now turn, much better matches my clinical experience. It's a more refined analysis. So I give it more weight than does Dr. Ghaemi. See what you think.
Then: It does work better than placebo -- at least for patients with moderate-severe depression
Dr. Calabrese worked with a different team, on the same data, looking more closely at who responded to lamotrigine and who didn't. If patients with severe depression were examined separately from those with mild-moderate depression, a very different result emerged [Geddes].
To understand the results in the graph below, you need to understand what a "meta-analysis" is and how its results are presented. If you're new to this, hang on, it's not that tough (you're about to get a simplified view of this statistical approach). If you know that statistical approach, skip to the Results.
In simple terms, a meta-analysis is like taking 5 different bus-loads of people going to a football game, putting them in the same room, and asking "who are you rooting for?" Whereas any given bus might be overwhelmingly for the Beavers, and another clearly in favor of the Ducks, when you put all 5 buses together you get a more representative sample of the attendees at this game. Not perfect, of course, but better than sampling a single bus, right?
Lamotrigine was researched as a treatment for bipolar depression in 5 different major studies. In four out of five of them, it was no better than a placebo (leading to the earlier Calabrese paper noted above). But in each study, there was a clear trend toward being better than a placebo. It's as though in each bus the crowd is leaning toward the Beavers, but there are a significant number of Ducks fans in there diluting the enthusiasm.
But when you combine the folks from all 5 buses, the room is now quite overwhelmingly filled with Beaver fans drowning out the plaintive cries of Duck lovers. Hey, in Oregon, this really happens, every year. The U. of Oregon is just down the road from my little Corvallis, home of the mighty Beavers. Well, not so mighty, most of the time; but many people here live in annual (perpetual?) hope of a triumph. At least every year there's a chance to beat Oregon. But I digress...
The point: in the graph below, you'll see each of the 5 individual studies displaying results, lamotrigine versus placebo. But then those results will be lumped together, and averaged -- and presto, where most of the studies did not show lamotrigine as better than placebo, when the studies are averaged, the medication does emerge as superior. How can that be? It's as though in each bus, there are slightly more Beavers than Ducks. For any given bus, the difference is small, almost unnoticeable. But if you put enough busloads together, then you can see the numerical superiority of the Beaver fans. Not their teams, necessarily...
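For readers who like to see the arithmetic behind the bus analogy, here is a minimal sketch of the pooling idea. The response counts below are made up for illustration (they are not the actual lamotrigine trial numbers), and the two-proportion z-test is just one simple way to compare a drug arm with a placebo arm:

```python
# Illustration (hypothetical numbers, NOT the actual trial data) of why
# pooling several studies can reveal an effect no single study shows.
import math

def z_two_proportions(resp_a, n_a, resp_b, n_b):
    """Two-proportion z-statistic comparing drug vs placebo response rates."""
    p_pool = (resp_a + resp_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (resp_a / n_a - resp_b / n_b) / se

# Five identical hypothetical studies: 100 patients per arm,
# 55% respond on drug vs 45% on placebo -- a real but small difference.
studies = [(55, 100, 45, 100)] * 5

for drug_r, drug_n, plac_r, plac_n in studies:
    z = z_two_proportions(drug_r, drug_n, plac_r, plac_n)
    # Each study alone: z is about 1.41, below the usual 1.96 cutoff,
    # so each single study looks "not significant."
    print(f"single study z = {z:.2f}")

# Pool all five studies ("everyone in one room"): 275/500 vs 225/500.
drug_total = sum(s[0] for s in studies)
plac_total = sum(s[2] for s in studies)
z_pooled = z_two_proportions(drug_total, 500, plac_total, 500)
print(f"pooled z = {z_pooled:.2f}")  # about 3.16 -- clearly above 1.96
```

The drug-placebo gap is identical in every study; only the sample size changes. That is the whole trick: a small, consistent difference that each study is too small to "prove" becomes statistically clear once the studies are combined.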
Okay, now let's look at how this appeared in the key graph from that study.
If you already understand how the results of a meta-analysis are displayed, I'll just show you the result first. But if this graph means little or nothing to you, jump to the next bit of text and I'll walk you through it, okay?
Two sets of results are shown here. The first set of 5 squares appears above the subtotal indicated by the upper open diamond, presenting the results, from 5 different research studies (identified by code number), for patients whose initial depression scores were not very high. Less than 24 on the Hamilton Depression Rating Scale (HDRS) is not severely depressed, but it's not mild depression either. A person can get in a depression study with an HDRS of 17 or more. (I'll tell you what the squares and the lines mean in a minute, if you're not familiar with these graphs.)
By contrast, the second set of squares above the widest open diamond presents the results for patients with moderate to severe depression -- HDRS 24 or higher when they entered any of these same five research studies.
As you can see, the squares for the more severely depressed group (HDRS ≥ 24) are farther to the right. Here's what that means. The bold vertical line marks the question "was lamotrigine better than a placebo?" If a square is to the right of the bold vertical line, the answer is yes. (I won't confuse you with the meaning of the other lines and the square sizes, but have explained those in a note below* if you'd like to hear more).
As you can see, for the more severely depressed patients (the lower set of 5 squares), lamotrigine was "more better" than placebo, versus the less severely depressed patients, where lamotrigine was much closer to placebo. Technically the average of those upper squares (represented by the upper open diamond) does not statistically outpace placebo, whereas it does, officially, for the lower set (lower open diamond).
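For the curious, each square-and-line in a graph like this is built from a ratio comparing the two treatment arms, plus a confidence interval. Different meta-analyses use different ratio measures; as a generic illustration (with made-up counts, not the numbers from these trials), here is how an odds ratio and its 95% confidence interval can be computed from a single study's 2x2 table, using the standard log-odds method:

```python
# Hedged illustration with hypothetical counts: how one "square and line"
# on a forest plot can be computed (odds ratio + 95% CI, Woolf's method).
import math

def odds_ratio_ci(a, b, c, d):
    """a,b = drug responders/non-responders; c,d = placebo responders/non-responders."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical study: drug 60 responders / 40 non; placebo 45 / 55.
or_, lo, hi = odds_ratio_ci(60, 40, 45, 55)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# The square sits at the OR; the horizontal line spans lo..hi.
# The bold vertical line is at 1.0 ("no difference"). A square right of
# that line favors the drug; the result counts as statistically clear
# only when the entire horizontal line stays to the right of 1.0.
```

In this made-up example the whole interval lands above 1.0, so this hypothetical study would count as a win for the drug on its own. In the real lamotrigine studies, as described above, the individual intervals mostly crossed the line, and only the pooled diamond for the severely depressed group cleared it.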
*Further explanation of chart details: