Marginalizing the Psychiatric in Dementia Treatment

I recently came across an interesting post written by Allen Power that raised the question of whether dementia is better thought of as a psychiatric or a neurological problem. Power argues that dementia is increasingly viewed as a psychiatric illness, its symptoms of distress treated as analogous to mental illness. Psychiatrists are brought in as “expert pill jockeys” to control behavioral problems with antipsychotic drugs. Power thinks this approach is wrong:

“Dementia is not a psychiatric illness. It is a change in one’s experience of their surroundings and how they process information, based on structural neurologic changes. It is as much a psychiatric illness as would be a stroke. And people’s interpretations of the world around them may seem confused to us, but they are nothing like the symptoms of an organized psychosis.”

According to Power, the tendency to psychiatrize dementia leads us to overlook non-pharmacological interventions, which he argues have been shown to be the safest, most efficacious and most enduring ways to manage the behavioral problems associated with dementia. He concludes that it would be much better to view dementia as “a neurological disability with secondary psychological challenges,” and that psychiatry’s role should not be simply prescribing pills to control behavior but helping with the broader psychosocial challenges that dementia entails.

I follow and admire Power’s work as a geriatrician and one of the leading critics of the dominant drug-based approach to treating dementia. But I think that there is an important historical issue that structures the problem he raises. Without understanding and explicitly confronting this issue, efforts to change the dominant approach are not likely to have much traction.

In arguing that dementia is a neurological rather than a psychiatric condition, Power follows the dominant modern medical approach to dementia in placing cognitive symptoms, attributed to “structural neurological changes,” at the center and relegating emotional and psychological changes to the periphery — mere epiphenomenal reactions to the primary cognitive damage. This is somewhat arbitrary since, for patients and family members at least, the emotional and psychological symptoms are often as prominent and disturbing as the cognitive ones. But it follows a deep historical tendency in modern medicine to view psychiatric symptoms and mental disorders as less legitimate because they are not clearly attributable to pathological structures in the body.

This bias emerged clearly, in the United States at least, in the late nineteenth century as the development of germ theory and microbiology created a more scientific approach to medicine. Acute infectious diseases, which could be attributed to a particular pathological agent and effectively treated with a specific drug, increasingly became the paradigm of modern medicine, especially as antibiotics emerged in the twentieth century. Chronic illnesses, especially psychiatric ones, seemed less legitimate, and the medical specialties that focused on them lost prestige in the era of “the magic bullet.”

Psychiatry was further marginalized during this period by its overlap with the other medical specialty claiming expertise over the brain and mental phenomena, neurology. Though the distinction between the psychiatric and the neurological has perhaps always been somewhat arbitrary, neurologists during this period, especially in the United States, were generally successful in associating their specialty with cutting-edge science while psychiatrists struggled under the stigma of their historic association with asylums and chronic, incurable madness.

The history of psychiatry since the late nineteenth century can be interpreted as an effort to compensate for this marginalization. Leading psychiatrists of the period, especially in Germany, sought to put their field on a scientific basis commensurate with the advance of medicine as a whole by showing that mental illness could be linked to specific brain pathologies. When this approach failed, Emil Kraepelin turned toward a quantitative assessment of clinical symptoms as a more scientific means of defining psychiatric disorders. Freud and his followers meanwhile sought to provide a scientific basis for psychiatry by turning away from the intractable problem of psychosis and developing a unified, expansive theoretical framework to explain and treat the mind. In recent decades, armed with new insights into genetics and neurochemistry and new technologies for exploring the brain, psychiatry has returned to the dream of anchoring psychiatric symptoms and disorders firmly in the brain. But apparent progress in understanding the brain only perpetuates the marginalization of psychopathology that cannot be clearly associated with something specific that is wrong with the brain: psychiatric symptoms and clinical disorders retain an ambiguous status unless they can be tied to specific pathological processes.

While Power provides a strong critique of psychiatry’s reductionist approach to managing the “problem behaviors” of dementia with antipsychotic drugs, in this post at least he ironically appears to endorse much of the mainstream medical concept of dementia that historically produced such reductionism. So long as a reductionist model of dementia as simply a brain disorder retains a near-exclusive grip on medicine, and so long as pathology is considered more real than psychiatric symptoms, let alone social and cultural factors, medical practitioners of all kinds are much more likely to respond with a prescription pad than with the agenda for social and cultural change that Power calls for.


History and the Will to Power in the Alzheimer’s Field

It is a strange time to be a historian interested in Alzheimer’s disease.

On the one hand, since the putative one hundredth anniversary of Alzheimer’s disease, which was confusingly celebrated in both 2006 and 2010, the Alzheimer’s field seems completely enamored with its storied past. Virtually every major book, conference or news article on contemporary Alzheimer’s research contains at least a brief account of the origins and development of the basic concepts of the Alzheimer’s disease field. On the other hand, as history, these accounts are typically superficial at best, and are often distorted and misleading. The Alzheimer’s field remains so distressingly unaware of and uninterested in its actual past that one could wonder whether it is afflicted with some sort of memory disorder—except that this alienation from a meaningful engagement with history is quite typical of contemporary society.

My quarrel with the way the Alzheimer’s field represents its history is grounded in what I see as the core value of historical scholarship. If being a historian means anything, it is a commitment to the complexity of the past. Our interest in history comes from questions and concerns we have in the present and the patterns we see in the past are inevitably shaped by this. But we must remember that the people of the past, whose experiences we would try to understand and learn from, had lives fully as complex as our own, and did not necessarily know and share the concerns we have in the present. When we attend to the complexity of the past it can serve as a source of rich human experience to interrogate and learn from. But if we impose the concerns of the present too strongly, we silence the individual and collective voices we could learn from and produce a self-serving narrative that can only reinforce our own biases. Historians cannot accept facile, reductive accounts of the individual and collective lives of people in the past in order to serve our purposes in the present.

A recent infographic on the history of Alzheimer’s disease research has begun circulating that distills the field’s warped history of itself into a viral nugget. It’s a nice design that pulls together information on the prevalence and financial burden of Alzheimer’s, the growth in funding for biomedical research, a list of drugs that have been approved for treating Alzheimer’s, and a timeline of “breakthroughs in research” drawn from standard sources: the websites of the Alzheimer’s Association and other major organizations in the Alzheimer’s field and mainstream media sources like CNN and Consumer Reports. As a whole, the infographic makes it clear that this is, or soon will be, a history of medical triumph:

“It has been called incurable, and its diagnosis comes as a death sentence. But as research accelerates, new breakthroughs become the norm, and donations come pouring in, sufferers of Alzheimer’s and their families are finding new hope in the possibility of future research and treatment options. See the history behind Alzheimer’s disease, the milestones in its research, and the global push for a cure.”

Shorn of any clarifying or qualifying context, all of the information in the graphic is grossly oversimplified. But the distortion of history is simply outrageous. The timeline begins with the early twentieth-century work of Alois Alzheimer associating plaques and tangles in the brain with dementia, and Emil Kraepelin’s creation of the term “Alzheimer’s disease” in 1910. This is all true, but it ignores the fact that their concept of dementia was quite different: they reserved the term Alzheimer’s to distinguish the rare cases of early-onset dementia from the much more common form of senile dementia, which they seemed content to view as an extreme form of normal aging. Moreover, Alzheimer and Kraepelin did not view dementia as a major public health issue, and they showed no interest in developing therapeutic interventions; rather, their interest in dementia was aimed at advancing the intellectual power and social authority of psychiatry.

Remarkably, the only “breakthrough” mentioned between 1910 and the 1980s is the invention of the electron microscope that allows “deeper study of the brain.” Inclusion of this in a timeline on Alzheimer’s research is inexplicable since its inventors had no interest in Alzheimer’s disease, and Alzheimer’s researchers would not use the electron microscope in their work until the 1960s.

But to anyone who has systematically studied the history of Alzheimer’s disease, even more inexplicable is that a major body of research on the dementias is ignored. From the 1930s through the 1950s, American psychiatrists wrote extensively about age-associated cognitive deterioration from a psychodynamic perspective. While this work goes against the grain of more recent research on brain pathology, as I showed in my first book, it was highly influential at the time, challenging the therapeutic nihilism that had characterized medical approaches to senility and contributing to a broader transformation in social expectations for old age without which the emergence of Alzheimer’s as a major public issue would have been impossible.

The creation of the National Institute on Aging in 1974 and the Alzheimer’s Association in 1980 are noted, but there is no indication of the political machinations behind them. They are, it seems, the straightforward and inevitable institutional forms that emerge to deal with a dread disease. These have indeed been important organizations, but suffice it to say the reality is a good deal more complex than this.

The remainder of the timeline is devoted to breakthroughs in drug treatment or that contribute directly to dominant trends in current research, such as the identification of the beta amyloid and tau proteins, the discovery of gene associations, or the development of the transgenic mouse model of Alzheimer’s disease. Each of the drugs that have been licensed in the United States for treatment of Alzheimer’s is noted, but the now out-of-date “cholinergic hypothesis” on which they were based is ignored, as is the controversy that surrounded tacrine and the fact that all of these drugs have proven to provide little if any benefit to patients. The 1999 success of an Alzheimer’s vaccine in transgenic mice is noted, though not the fact that initial human trials proved a disastrous failure when it caused fatal swelling of the brain. The timeline culminates with the use of a cancer drug to reverse Alzheimer’s in mice earlier this year.

Those not afflicted with the sensibilities of the professional historian might wonder why this matters. Is oversimplification of the past really so bad if it encourages public support for research and provides hope for people who are struggling with Alzheimer’s disease?

My response is that an oversimplification of history encourages naive thinking about our present situation. This is a winner’s history, in which scientific developments in the past are included only if they contributed directly to the research agenda that is dominant in the present. It is a history that serves the interests of the leaders in the Alzheimer’s field today, suppressing contingency, complexity, and uncertainty so that the discovery of a prevention or cure seems inevitable if only we will make the necessary investment in their work. On this account, a crude version of Nietzsche’s will to power, not Alzheimer’s research on plaques and tangles in the brain, seems to be the real foundation of the Alzheimer’s field.

Such history inevitably leads to an inflated sense of the efficacy and importance of dominant social institutions like medicine, and ignores or marginalizes the importance of other social and cultural resources like art or spirituality that can and should be part of our understanding of the problem and part of our response. In the case of Alzheimer’s, this distorted history helps to rivet attention on faulty molecules in the brain and to foster a desperate hope in the power of biomedicine to produce a cure.

We need to create richer, more profoundly optimistic understandings of dementia and how we might respond to it. Attending to the complexities of the history of Alzheimer’s research is a good place to begin.

Confusingly celebrated in both 2006 and 2010. Alzheimer described the case of Auguste Deter at a small, regional conference of German psychiatrists in Tübingen in 1906, and Kraepelin created the eponym Alzheimer’s disease based largely on this case in the eighth edition of his influential textbook Psychiatrie, published in 1910.

Nietzsche’s will to power. The more nuanced version of the will to power found in Nietzsche’s original work did not condone simplifying history to fit the wishes of those seeking to exert power, and it remains an important concept in history and theory.

The Alzheimer’s “Hockey Stick” and the History of Late Twentieth Century Biomedicine

In this post, I use Google’s Ngrams to graphically summarize the history of Alzheimer’s disease, and discuss the direction of my current research.

My research on the history of Alzheimer’s began with a very simple historical question: how and why did the unlikely eponym Alzheimer’s emerge seemingly out of nowhere in the late 1970s to very quickly become a household word in the United States – one of the most feared medical diagnoses and the object of one of the best-funded disease-specific campaigns for public awareness and biomedical research in American history? As can be seen in the Google Ngram below, which graphs the yearly frequency of the term Alzheimer’s disease across the millions of English-language books in the Google database, from Alzheimer’s first publication of a case in 1906 to the year 2000, Alzheimer’s disease only became widely discussed in print after 1980. Since Alzheimer’s was available as a medical category from the start of the century, how do we account for the iconic “hockey stick” shape of public awareness of Alzheimer’s, suggesting that it only became a hot issue late in the twentieth century?

It is commonly argued that this simply reflects heightened awareness of the aging of the population over the past few decades, making Alzheimer’s a much more visible problem. But a second ngram below for the terms old age, aged and aging suggests that it is not so simple. Though there is an uptick around 1980, public awareness and discussion of aging have been much more stable throughout the modern period. The sudden emergence of Alzheimer’s must be linked to something more specific in American culture and society.
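For readers who want to probe such comparisons themselves, the ngram data can be analyzed programmatically. The sketch below, in Python, parses data in the general shape of the Ngram Viewer’s JSON export and computes a crude measure of the post-1980 jump; note that the term frequencies used here are made-up illustrative values, not actual Google Books data.

```python
import json

# Hypothetical sample shaped like the Ngram Viewer's JSON export;
# the frequencies below are illustrative, not real Google Books data.
sample = json.loads("""
[{"ngram": "Alzheimer's disease",
  "timeseries": [1e-9, 2e-9, 3e-9, 5e-9, 4e-8, 2e-7, 6e-7, 1.2e-6]}]
""")

START_YEAR = 1940  # this toy series runs 1940-2010 in ten-year steps
years = range(START_YEAR, START_YEAR + 10 * len(sample[0]["timeseries"]), 10)
series = dict(zip(years, sample[0]["timeseries"]))

# A crude "hockey stick" measure: mean frequency after 1980 vs. before.
before = [f for y, f in series.items() if y < 1980]
after = [f for y, f in series.items() if y >= 1980]
ratio = (sum(after) / len(after)) / (sum(before) / len(before))
print(f"post-1980 mean frequency is {ratio:.0f}x the pre-1980 mean")
```

With these toy numbers the post-1980 mean runs two orders of magnitude above the pre-1980 mean, which is precisely the kind of discontinuity the real ngram displays.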

My explanation of the Alzheimer’s hockey stick has two parts. First, of course, was that Alzheimer’s did not really emerge out of nowhere in the late 1970s, but was in fact part of a long developing set of cultural anxieties about aging, senility and selfhood. I argued in my first book that the roots of public anxiety about age-associated cognitive deterioration in American society and culture can be found in the mid-nineteenth century emergence of industrial capitalism, which saw ongoing heated public debate about whether older workers, burdened with what medical science characterized as inevitably deteriorating bodies and brains, could possibly keep up with the pace of industrial work and complex, bureaucratic management. Through the mid-twentieth century, as consumer culture became more deeply embedded in American society, anxiety about aging gradually shifted to whether older people could create and sustain a coherent self-identity in the vacuum of retirement and endless leisure, which by the late 1940s had become an ambivalent reality for most Americans over the age of 65. The quick and dirty summary of my argument here is that, as selfhood became a more problematic cultural category in industrial and mass consumer society, as identity became more of a project than an ascribed status in modern America, old age increasingly became a site of cultural anxiety, and senile dementia – which destroyed the ability to craft a coherent and cohesive self-narrative – became a far more frightening condition, indeed one of the most frightening of all medical conditions in post-World War II America. And this fear set the stage for the explosive emergence of Alzheimer’s after 1980.

The second part of my argument about the Alzheimer’s hockey stick concerns the way in which Alzheimer’s disease was re-constructed as a specific disease category in the mid-1970s. From the time it was created by Emil Kraepelin and Alois Alzheimer in the early twentieth century, Alzheimer’s disease has always been a troublesome construction – at once one of the most stable and one of the most ambiguous of disease categories. On the one hand, Alzheimer and Kraepelin found it an interesting and potentially important disease category because it defined one of the very few psychiatric conditions with a set of clear clinical symptoms that could be correlated with an equally clear set of pathological structures in the brain. From their work to the present, Alzheimer’s has been understood within the same basic framework of brain pathology – the neuritic plaques and neurofibrillary tangles – correlated with a clear clinical picture of global cognitive deterioration. But on the other hand, it has been difficult to disentangle Alzheimer’s disease from aging. In 1910, Kraepelin created the eponym Alzheimer’s disease based on his protégé’s description of the characteristic brain pathology and clinical symptoms of senile dementia in a patient who was only 51 years old. Kraepelin thought that early age of onset – which he arbitrarily defined as before the age of 65 – was itself enough to warrant putting such cases in a separate disease category. The basic rationale was that while dementia in older people might be thought of as an extreme variant of the common if not universal cognitive decline associated with the normal aging process, the same condition occurring at an earlier age was very unusual and so could reasonably be thought of as a disease. Thus, as conceptualized by Kraepelin and Alzheimer, this condition occupied a rather marginal nosological space.
As senile dementia, it was a very common condition, but was so closely associated with aging that it seemed questionable to call it a disease. As Alzheimer’s pre-senile dementia, it seemed reasonable to think of it as a disease, but one that was vanishingly rare. This distinction persisted in the medical literature for decades, though it was clearly understood that they were essentially the same condition.

The distinction was eliminated by a group of American researchers, government officials within NIH and activist caregivers who sought to generate awareness and funding for biomedical research into dementia, and who were savvy about the political ramifications of disease categorization. In the mid-1970s, through a number of prominent editorials in various medical journals and NIH-sponsored consensus conferences, they successfully argued that since there was no meaningful clinical or pathological distinction between Alzheimer’s and senile dementia, the two should be considered a single entity – and that entity, crucially, should be called Alzheimer’s. By combining the two categories, they could claim that the condition was a major health problem afflicting millions of people; by calling it Alzheimer’s rather than senile dementia, they could claim it was not “just aging,” but a dread disease worthy of a massive, publicly funded research initiative to understand its cause and discover a means of effective treatment or prevention. This strategy brilliantly harnessed deep anxieties about aging to the growing power and prestige of biomedicine in the second half of the twentieth century.

All of this can be read quite nicely into the final ngram below – the terms senility and senile dementia gradually rise in prevalence in English-language books from the late nineteenth through the twentieth century – culminating in the explosion of interest in Alzheimer’s disease after 1980.

My current research will go beyond these broad political and social ramifications of the emergence of Alzheimer’s as a major public issue to explore in some detail the medical world that was actually created by it. The massive investment of financial, institutional, and intellectual capital into research on the causes of and possible treatments for Alzheimer’s disease by both the federal government and private industry since the 1970s transformed dementia research from a small field with a broad agenda to a massive, multi-faceted research enterprise focused much more narrowly on pathological mechanisms. As I’ve remained peripherally connected to the Alzheimer’s field with various projects, I’ve been most impressed with the difficulty of maintaining coherent institutional and intellectual frameworks that connect and coordinate the efforts of diverse practitioners working on different agendas within modern biomedicine. In many ways what happened in the Alzheimer’s field reflects broader changes in the scale and structure of medicine during the same period. Understanding how these developments shaped the way that researchers in the dementia field worked, the way that physicians diagnosed and treated dementia, and the way that patients and their family members experienced both dementia and the treatment and care they received from medical and health care providers can help move us toward a richer, more nuanced understanding of what medicine has become since the latter half of the twentieth century.