Monday, August 30, 2010

Confessions of a Psychopharmacologist


Half of the people can be part right all of the time,
Some of the people can be all right part of the time,
But all of the people can't be right all of the time.
I think Abraham Lincoln said that.
"I'll let you be in my dreams if I can be in yours."
I said that.

Bob Dylan, "Talkin' World War III Blues"


The other day a new patient referred to the "psychopharmacologist" he used to see in the major metropolis he used to call home. It occurred to me that just as the Moliere character learned that all along he had been speaking in prose, for the past year I have been functioning as a psychopharmacologist, although without the presumably higher fees that that slightly pretentious title might attract. It does sound better than medication manager.

In this role I do not, if only for my own sanity, regard people as merely bundles of symptoms or diagnoses; I do my best to understand them as whole individuals, with the often brute realities of medical morbidity, economic desperation, social chaos, and family dysfunction that generally color their lives far more than any psychological subtleties do. But in the clinics where I work, almost all patients are required to have either a psychotherapist or a case manager in addition to medication visits, so, tending to compartmentalize as humans do, they soon associate me with an important but very circumscribed role.

Why have I drifted away from psychotherapy, apart from the inertia of now well-recognized market forces? I think a recent Psychology Today post by Michael Bader, D. M. H., provoked by the already notorious Daphne Merkin article in the Times, helps to explain why. A self-described psychoanalyst, Bader dissents from what he views as the insular, navel-gazing propensities of organized psychoanalysis. His piece should be read firsthand, but basically, he accuses psychoanalysis of emphasizing the often endless exploration of theory and nuance while neglecting practical measures that may relieve suffering.

I took Bader's argument to be directed not only at psychoanalysis per se, but at any open-ended psychotherapy that valorizes ambiguity and self-knowledge over any specific notions of outcome as conventionally considered. The natural rejoinder to this is that Bader has erected a straw man, that any good psychoanalyst will include pragmatism in a sensible course of treatment, and that only a tiny minority practice in the purblind, cloistered Manhattan fashion described in Merkin's article. I would be curious to hear the reactions of any therapist readers (and you know who you are).

Even though I have never had formal psychoanalytic training (although I have been widely exposed to its tradition and rationale) and do not currently practice psychotherapy, I felt a guilty twinge of recognition, for the kind of infinitely curious investigation of uncertainty that Bader assails is exactly the kind of process that I find interesting, and the deeper and the more ambiguous the better. Give me understanding over mere symptom relief any day. However, it seems to me that that sort of depth psychology has been threatened less by medications than by competing therapy models (and perhaps by changes in sensibility characterizing the culture at large).

The view from here is that psychotherapy is ever more associated with time-limited, evidence-based, measurable cognitive and behavioral procedures. I am the first to say that these pursuits help many people, and I support them absolutely. Look at the self-help aisle--the majority of folks seek basic recipes for meaning and contentment in life. The trouble is that from a practitioner's standpoint, I have no interest whatsoever in the simple, sunny platitudes of cognitive-behavioral therapy, the well-worn ruts of the seven secrets of highly effective people or whatever.

If or when I do set up my own practice eventually, any psychotherapy I would undertake would be openly in disregard of any simplistic symptomatic outcomes, and indeed would not profess to "treat" any DSM mental disorder. The impractical is merely another name for that which is worth doing for its own sake. Who am I? How to live? What to do? These are the questions I find interesting in psychotherapy. That is not to say that one should self-indulgently wrap oneself in a cocoon of existential stupor. Proximate measures of well-being underlie higher-order inquiries. Eat well, sleep well, exercise. But these things are common sense. I am intrigued by those things that aren't common sense.

Daphne Merkin may have been unfair to those analysts. They offered her profound experiences of self-exploration; what she was seeking was symptomatic improvement. Those contrasting expectations should have been made clear. Psychoanalysis above all treats...a hunger for psychoanalysis and all that it entails, and there is nothing whatsoever wrong with that. Any person's suitability for psychoanalysis depends far more upon intellectual disposition (i.e. faith and steadfastness in the process) than upon diagnosis.

If or when I return to providing psychotherapy, it will be based on unpredictable contingency, much as life is. Yes, a therapist must have expertise, a certain self-awareness and wisdom, including firm boundaries, but beyond that the endeavor floats on a sea of uncertainty. I do share Freud's tragic view of life. Short of that, I will continue to work on biological modification of the brain, the means of which I will tackle next post.

Sunday, August 29, 2010

Just Because


Some say they're goin' to a place called Glory and I ain't saying it ain't a fact
But I've heard that I'm on the road to purgatory and I don't like the sound of that
Well, I believe in love and I live my life accordingly
But I choose to let the mystery be

Iris Dement


I have in mind a couple of posts on psychotherapy and meds, but for today just another piggyback on NPR's excellent 13.7 blog, where Ursula Goodenough posts on matters of ultimate questions. Her last name, while not in fact made up for this post, is eminently suitable and must have had an effect on her formative development (unless, oops, it is her married name).

Her post about her own "covenant with mystery" is interesting in its own right, but I was particularly struck by her quote from an unnamed contributor to a listserv at the "Institute on Religion in an Age of Science," for the comment expresses, more clearly and succinctly than I have achieved, the ecology of belief:

While theism per se may seem irrelevant from several perspectives, the impulse underlying it is not. The concept of a personal God is one way of envisioning the ultimate source or organizing force of all that is. Many feel this image has flaws. But unless an alternative is adopted in its place, the absence leaves a Big Question, and gaping holes in understanding/belief are uncomfortable. I agree that understanding or appreciating Reality does not require Theistic causality. But, until a naturalist perspective can offer some type of image of the ultimate that can both be grasped and feel right, it will remain lacking in something essential.

As Goodenough notes, there is an irreducible subjectivity to belief, relating to how it makes one feel; it is a deeply personal matter, as much so as the kind of person one falls in love with. For agnostics, theism does not "feel right," whereas for believers it does, or at least it feels right enough. Similarly, for me naturalism feels good enough, but I recognize that this puts me in the distinct minority, both historically and currently (and perhaps futuristically as well).

Today's loud atheists, the Richard Dawkinses and Daniel Dennetts of the world, are good scientists and philosophers but poor psychologists. Projecting their mentalities upon mankind, they would deprive the majority of their spiritual bread while putting in its place something that, for that majority, tastes of ashes.

I think it was Emerson who wrote somewhere that the genius believes that what is true of himself is true of all humanity. Yes, there is overlap between genius and narcissism, but it is only partial.

Tuesday, August 24, 2010

Truth and Popularity

The last comment provoked some thoughts which probably warrant a new post. What is the relation of truth/value and popularity? The impulsive answer is none at all; indeed, in political terms, the very test of moral integrity resides in its indifference to polls. However, I don't know that it's that simple.

It's obvious that physical/scientific truth bears no relation to popularity; if global warming is primarily the result of human activities, this will be the case even if only 5% of the population believes it to be so. In contrast, I would suggest that advocates of both moral and aesthetic truth do aspire to majority confirmation, but crucially over the (very) long term.

I think it was the ancient Greek Solon who reputedly said, "Call no man happy until he is dead," by which he meant not that death brings happiness (although that is certainly one interpretation), but that a life can only be judged in its totality, when all the results are in. An otherwise blessed or virtuous life can go seriously sour in its final years, just as an otherwise benighted life can see redemption toward the end. Similarly, moral and aesthetic values may be measurable only over the lifespan of the human species.

At any given point in time, Homer and Shakespeare enjoy fewer readers than, say, John Grisham or Dean Koontz, but in addition to the fact that the former enjoy much more sophisticated literary apologists than the latter, their readership is multiplied over the generations. In terms of sheer cultural influence, readership over time matters more than mass readership within a short period; I can't say what the "value multiplier" might be--perhaps 1 million readers spaced out over a century matter far more than 100 million readers concentrated in a year. So while at any given time a taste for Homer would have to be considered a minority or niche interest, in the grand scheme of literary influence it is very much a mainstream taste. Aesthetics is a popularity contest, but those amassing "hits" or publishing figures today are likely the hares who will lose out by far to the tortoises in the end.
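
For what it's worth, the hunch can be put into back-of-envelope form. The following Python sketch uses the hypothetical reader counts just mentioned; the generational compounding rationale is invented purely for illustration, not any actual model of literary influence:

```python
# Toy sketch of the "value multiplier" speculated about above.
# All assumptions here are invented for illustration.

dispersed_readers = 1_000_000        # e.g., Homer: spread over a century
concentrated_readers = 100_000_000   # e.g., a bestseller: a single year

# For the dispersed readership to outweigh the concentrated one,
# each temporally dispersed reader must carry at least this weight:
breakeven_multiplier = concentrated_readers / dispersed_readers
print(f"break-even value multiplier: {breakeven_multiplier:.0f}x")   # 100x

# One made-up rationale: each generation of readers seeds the canon
# for the next, so a work's influence compounds generation over
# generation rather than merely accumulating.
GENERATION_YEARS = 25
generations = 100 / GENERATION_YEARS          # a century = 4 generations
required_gain = breakeven_multiplier ** (1 / generations)
print(f"required compounding per generation: {required_gain:.1f}x")  # ~3.2x
```

On those invented numbers, each generation need only amplify a work's influence about threefold for the century-long trickle to beat the one-year flood.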

Similarly, while avid support for human rights may have been a minority position in Hitler's Germany, in the prevailing context of moral thought, it is now the majority position. There are certain moral convictions--equality for women, gay rights, vegetarianism--that, considered from the point of view of world history, remain very much minority views, but the hope is that over the very long term of centuries or millennia, these will become dominant opinions. When George W. Bush stood by his decision to invade Iraq, he may have been thumbing his nose at contemporary popularity, but he was wagering that in the much larger court of future historical and public opinion, he would be vindicated (we'll see).

As the cliche goes, from the standpoint of science, a tree falling in the forest does make a noise even if no one is there to perceive it. The interesting question is: does a moral conviction hold true even if it doesn't attain its majority? Take the equality of women for example. Many of us adhere to this position even though it has been violated throughout much of history as well as in the world today; but as I said, we hope and intend that in the future of humanity it will become a default moral value. But what if human life had been abruptly ended in, say, the year 1800 by a massive asteroid? Could it be said that the equality of women was somehow a "true" moral value for Homo sapiens even if it had scarcely been observed by the species up until that time?

I think this is true primarily from the point of view of logic. Inasmuch as we are rational creatures, we admire consistency in moral views. So if a premise of one's moral system is that human beings should be treated equally in terms of basic rights, then the equality of women and minorities is logically implied whether or not human beings are around to recognize this. So even if civilization were ended by an asteroid tomorrow (or if Republicans took over the country in perpetuity), gay rights may still be logically valid as an implication of the values reflected by the Constitution. However, this is arguably a formal and somewhat empty triumph. When we press our moral views, we aim not only to persuade our current peers, but also to set precedents that may contribute to crucial influence over time.

So when we argue values, whether moral or aesthetic, the stakes are large--we lobby for the kind of human world we hope to see instantiated in the future. If only a few will join us now, we hold out hope that we may campaign for coming generations. A scientist who feels certain that he has arrived at the empirical truth on a certain matter may be blissful in the knowledge even if it is not much appreciated. But an artist or moralist, while he may tolerate contemporary disdain, must--if he is not solipsistic--be disappointed to know that his vision would be relatively rejected not only now, but also in perpetuity. But I see this, in the best cases, not as an issue of narcissistic admiration, but as one of shared respect for a human ideal. Even those artists closest to schizoid pathology make a bid for connection to future readers. Emily Dickinson did not publish for the most part...but left her poems to be found. Franz Kafka bade his works be burned...but did not burn them himself.

Sunday, August 22, 2010

Piscivores


"...until everything
was rainbow, rainbow, rainbow!
And I let the fish go."

Elizabeth Bishop, from "The Fish"


There is a book out that purports to demonstrate that fish feel real pain, which is not a mind-boggling surprise. At Salon Linda Kernohan discusses her vegetarian vicissitudes, including occasional ambivalence about fish, experiences that are pretty similar to my 18 years of meat-avoidance. The main difference is that I never aspired to veganism--it always struck me as too fanatical and puritanical and, ironically, as a kind of denial of the intricate web that we cannot help sharing with other creatures.

To live is to do damage, even if it is to the insects trodden unawares underfoot. There is also the fact that we all benefit, as eventual patients, from animal-related medical experimentation. We affect animals by doing anything that emits carbon. It is disingenuous to suppose that in any aspect of our lives we do no harm. That is no argument, of course, to abandon harm-reduction.

I think vegetarianism turned me off pure philosophy forever. Let me explain. When I first encountered Peter Singer's Animal Liberation in my early 20's, his arguments seemed so prodigious, and ultimately so irrefutable, that I was converted and remain so to this day. However, it didn't take long to realize that not only friends and family, but the vast majority of the human race, remain unmoved by such arguments. I am hard-pressed to think of any other routinely accepted behavior that has so little ethical justification. (Friends and family remain non-vegetarian, which I fully accept of course, and I do not foist my diet upon my children).

The only competitor to meat-eating as mainstream ethical violation may be the blithe acceptance of vast swaths of absolute poverty by prosperous first world citizens, of which I am every bit as guilty if not more so than the next person, so my ethical achievement, such as it is, is selective. Interestingly, Peter Singer has written passionately and prolifically about that issue too. At any rate, these examples convinced me of the relatively small role that justified reasons play in human conduct. So much the worse for philosophy, which always sends me back to religion, literature, and psychology, that is, to the constellation of contingent human needs, among which formal ethics constitutes only a minor part.

Kernohan's vegetarian experience is interesting in a couple of respects. She mentions not missing meat, so it's not as if her day-to-day life is some kind of triumph of self-denial. I recall enjoying meat twenty or more years ago, but its absence does not feel like a privation. Like her, I eat fish rarely, maybe once every month or two, and it's hard to say why. Maybe it is knowingly wicked self-indulgence (vice on a very small scale indeed).

Kernohan also raises the intriguing issue of depression and vegetarianism, in this case wondering whether subtle dietary deficits could affect mood in ways we don't yet fully understand. However, while she acknowledges other factors that of course may be related to depression, I wonder if she has it backwards. That is, could a depressive tendency and a certain moral and emotional squeamishness predispose to vegetarianism? Who knows, maybe not only vegetarianism, but political liberalism as well, could for some be outgrowths of Melanie Klein's depressive position, a morbid intolerance of suffering? As Nietzsche wondered, if one is virtuous at all, is one virtuous from a position of strength or from one of weakness?

Sunday School



"The whole of existence frightens me," protested the philosopher Soren Kierkegaard; "from the smallest fly to the mystery of the Incarnatiion, everything is unintelligible to me, most of all myself." By contrast, the evolutionary reductionist Ernst Haeckel, writing in 1877, commented that "the cell consists of matter...composed chiefly of carbon with an admixture of hydrogen, nitrogen and sulphur. These component parts, properly united, produce the soul and body of the animated world, and suitably nourished become man. With this single argument the mystery of the universe is explained, the Deity annulled and a new era of infinite knowledge ushered in." Since these remarks of Haeckel's, uttered a hundred years ago, the genetic alphabet has scarcely substantiated in its essential intricacy Haeckel's carefree dismissal of the complexity of life. If anything, it has given weight to Kierkegaard's wary statement or at least heightened the compassionate wonder with which we are led to look upon our kind.

"A conviction akin to religious feeling of the rationality or intelligibility of the world lies behind all scientific work of a high order," says Albert Einstein. Here once more the eternal dichotomy manifests itself. Thoreau, the man of literature, writes comopassionately, "Shall I not have intelligence with the earth? Am I not partly leaves and vegetable mould myself?" Or Walt Whitman, the poet, protests in his Song of Myself: "whoever walks a furlong without sympathy walks to his own funeral drest in a shroud."

Loren Eiseley, from "Science and the Sense of the Holy"

Friday, August 20, 2010

Unanswerables

"The mystical is not how the world is, but that it is."

Wittgenstein


Marcelo Gleiser at NPR's 13.7 blog speculates about the ultimate inability of science to explain the first cause of the universe. The fact that there is something rather than nothing may not be a demonstrably empirical matter. I suppose the three great and still unsolved questions are: the origin of the universe, the inception of life, and the development of consciousness. Many have suggested that our minds may not be structured to solve some puzzles, ever.

Of the three, it seems to me that in the long run biology itself, the coming into being from the lifeless muck of self-replicating and eventually self-assembling organic systems, will be the "easiest" to account for. And I think that over time, even if it takes centuries, we will arrive at a decent understanding of how neural networks generate subjectivity, although the specifics of individual subjectivity will always retain some obscurity inasmuch as a particular consciousness is epistemologically a self-enclosed system (i.e. no outside agent could fully understand what it is to be me without in fact becoming me and in the process ceasing to be himself).

But the source of reality itself is a totally different kind of question, and one that may not in fact be scientific at all. For me the strangeness of the matter is that I can't even imagine what an explanation for the universe would look like or how it could possibly be satisfactory. One possibility is that the universe (or rather some grand multiverse from which our universe sprang) has always existed. But somehow I find this kind of infinite regress distasteful. In fact, infinity itself is distasteful except in the abstract.

However, neither am I happy with the notion that the universe had a specific origin, before which or outside of which there was truly nothing (such that "before which" or "outside of which" have no meaning inasmuch as time and space do not exist outside of the universe). The human brain evolved with assumptions of limits and of agents. Theistic accounts of creation, while intellectually unsatisfactory in all sorts of ways, are nonetheless easier to relate to. It is somehow emotionally easier to imagine that a stipulated God has simply always been than that a neutral multiverse has always been. I'm not sure why this would be so.

I have the same problem, actually, with space. That is, it is equally disturbing to imagine the universe as spatially infinite as it is to imagine that it would be even theoretically possible to arrive at a physical point beyond which there is non-being. That suggests that time and space are simply limiting frameworks of my contingent mind. So speculating about why the Big Bang happened or where it came from may be like aspiring to stare directly (without mirrors, etc.) at the back of my head.

No, I haven't been getting high or reading Heidegger. But philosophy is a kind of willful stupidity, the refusal to accept the obvious as obvious. To ask why reality not only follows abstract physical laws, but also exists at all (which is not required by those laws) is something like asking how one knows reality is "out there" at all rather than a mere dream or illusion of consciousness; both questions vainly seek for something within experience to justify the basic condition of experience at all. And with that one leaves the desert of philosophy for an oasis of common sense, and the weekend.

Wednesday, August 18, 2010

Emerging Adulthood


What's to come is still unsure:
In delay there lies no plenty;
Then come kiss me, sweet and twenty,
Youth's a stuff will not endure.

Twelfth Night


The Times today has a long article on "emerging adulthood" (the years between adolescence and full adulthood), increasingly advocated as a newly normative stage of psychological development. It is the ever more de rigueur period in which twenty-somethings "find themselves," bouncing in and out of jobs and relationships, moving back home, etc. It struck me because of the similarities with "new" psychopathological syndromes, such as increasingly mainstream obesity, (adult) ADHD, soft bipolarity, etc.

One similarity is the jostling of neuroscientific and cultural explanations. On the one hand, brain scans suggest that the pre-frontal cortex continues to develop until age 25, so why shouldn't we expect emotional and cognitive development to be an adventure until that time? Well, there is the fact that throughout much of the world and throughout history, sociological maturity kicked in well before age 25, that is, "emerging adulthood" is a phase that many have apparently been able to skip when needed.

The piece suggests that "emerging adulthood" may actually be an artifact of prosperity; basically, twenty-somethings lollygag around because they can, because they live in the most well-off nation-state in the history of the world, and their parents are willing and able to indulge them. To be sure, this is a mixed blessing: too much promise and too many choices can be burdensome. As I have argued before, the vices of the rich are now the vices of the middle class, who are rich beyond the dreams of avarice compared with most human beings who have ever lived. And yet we try to use neurobiology to justify this sociological state of affairs. It is the brain that adapts to the environment and to society, not vice versa.

And yet this needn't be a lamentable development. Life span is increasing, reproductive technologies augment the opportunities for child-bearing, and the retirement age may eventually be 70 or 75, so what is the hurry, exactly, to settle into the grind of work and children? The crucial change is in attitudes and expectations--the stigma of living with one's parents until age 30 is not so biting. Some sort of cultural tipping point has occurred, which may or may not have anything at all to do with neurobiology. This is allegedly about normal psychology, but it parallels metamorphoses in psychiatric diagnosis. We collectively decide what is normal, then look to science to try to justify the decision.

If Mama Ain't Happy...



What's the ugliest
Part of your body?
What's the ugliest
Part of your body?
Some say your nose
Some say your toes
But I think
It's your mind

Frank Zappa and the Mothers of Invention


"Since You Asked," Salon's advice column, features a strikingly detailed case of adult children trying to manage a parent's mental illness, in this instance severe anxiety and somatoform disorders superimposed upon baseline character pathology.

What is frustrating of course is the apparent lack of information and insight provided by psychiatry. The woman in question has no insurance, which in our benighted "health care system" renders her up a creek to begin with. To be sure, late-life anxiety and personality disorder are tough to treat--benzodiazepines can be risky, and rigid resistance to therapy is common. But one would have thought the family at least could have obtained a prognosis and suggestions for containment and harm-reduction, which psychiatry must be able to provide if nothing else.

Cary Tennis's "advice," such as it was, was unusually muted in this case: basically, deal with it (you will find the strength somehow) or don't deal with it (cut her off). Maybe he should have suggested that they plant themselves in a shrink's office and refuse to leave until they get an answer. As I've increasingly come to think recently, realistic prognostication is a lost art in psychiatry.

Much has been written about medicine's futile attempts to stave off inevitable death. Psychiatry is not directly involved in that fight, but it has its own counterpart, a perpetual stalling action in which medications and therapists are thrown at refractory symptoms willy-nilly in the hope that some day, somehow, either placebo effect or spontaneous remission will kick in. And perhaps they will, but patients and families should be told up front about the likelihood of that actually happening. Oops, honesty of that sort might disrupt the very placebo effect that one holds out hope for. So one can only steer between pessimism and disingenuousness.

Diagnosticism


Premodern umpire: "I call 'em as they are!"
Modern umpire: "I call 'em as I see 'em!"
Postmodern umpire: "They ain't nuthin' 'til I call 'em!"

(Attribution?)


In the current Psychiatric Times Ronald Pies, M.D. pooh-poohs the proposed diagnosis of "hypoactive sexual desire disorder" (I can't find an online link for it yet). I hold no brief for that particular problem (sounds like enhancement to me), but I found his article notable for his suggested approach--the "desert island test"--to defining mental disorder.

Specifically, Pies maintains that a true disorder is one that would cause both distress and incapacity even with respect to the kinds of basic survival functions needed for castaway solitude. Even apart from the objection that such isolation would provoke serious emotional problems in most people, it seems like an awfully restrictive model. For all mental disorders are exquisitely sensitive to stress and crucially contingent upon context, and for Homo sapiens, stress and context are primarily interpersonal. I can think of any number of severe schizophrenics, bipolar folks, and of course substance abusers who--again, if they could tolerate the loneliness--might function surprisingly well on a desert island.

Pies's model is an example of a common desire to clarify the bounds of psychiatric diagnosis by distinguishing endogenous from "merely" situational syndromes; the difficulty is that people cannot be fully understood apart from their situations. But it brings to mind the notion of mental disorder as one that impairs evolutionary fitness; this is an idea that aims to get at some primal ideal of (healthy) human nature, one free of all the dross of contemporary cultural pressures and expectations. Again, the problem is that human beings evolved as deeply social creatures, so the impact of social and cultural context is inextricable from human nature.

In an effort to dismiss mere cultural consensus as a source for psychiatric diagnosis, Allan Horwitz and Jerome Wakefield write in The Loss of Sadness: How Psychiatry Transformed Normal Sorrow Into Depressive Disorder:

Moreover, when concepts of disorder are equated with whatever conditions are called disorders in a particular group, the possibility of scientifically evaluating and critiquing these concepts is lost. Also lost is the commonsense understanding that a culture could be wrong in its judgments about disorder. For example, the Victorians were wrong in believing that masturbation and female orgasm were disorders, and some ante-bellum Southerners were wrong in holding that runaway slaves were suffering from a mental disorder. But if disorders are just culturally relative conditions, then we cannot explain why these judgments were wrong, because those diagnoses did indeed express the values of their times. (p. 219)

This seems like a fine bit of epistemological panic to me, as if the lack of scientific evidence leads inevitably to mere relativism. What if diagnostic guidelines are rooted not in science, but in the same kind of rigorous and argued (but not incontrovertible) consensus that prevails in, say, ethics and law? After all, slavery and sexism also expressed the values of the 19th century, but we can firmly believe and argue that they were deeply wrong. Diagnostic guidelines are in fact made up as we go along, but only in the same way that the courts "make up" the law as they go along, that is, based on reasoning and rooted in prevailing cultural values. In fact, psychiatrists are something like judges, applying precedent to the circumstances of a unique case. Similarly, an umpire's calling of balls and strikes is inherently subjective, but it is a practice situated in accepted guidelines for the strike zone.

Attempting still to keep diagnosis tidy, Horwitz and Wakefield write:

Problematic mismatches between human nature and current social desirability such as adulterous longings, male aggressiveness, or becoming sad after losses are not in themselves disordered. For example, it may be fitness enhancing in our culture not to have tastes for fat and sugar, but that does not mean that people who have such tastes are disordered; that is how we were designed to be, due to conditions that existed when we were evolving. (p. 220)

They seem to imply that such entities as ADHD, obesity, and substance abuse are therefore cultural pathologies or toxins, having nothing to do with individually diagnosed disorders. However, they immediately go on to qualify this:

However, sometimes environmental conditions that are too different from what is evolutionarily expected can produce real depressive disorders because people were not naturally selected to function in such settings. Modern warfare, for example, leads many soldiers to develop mental disorders that persist far beyond the immediate combat situation because the human brain was not developed to function under such conditions. (p. 220)

It is hard to see how a situational background like war is more productive of individually diagnosable disorders than, say, the easy availability of abundant calories. Is the implication that obesity is merely a personal choice, whereas trauma is not?

There is a real biology of differences of emotional responsiveness, interpersonal relatedness, stress resilience, etc. just as there is a real physics of a baseball's trajectory over the plate. Technologies of biology and physics can modify these processes with greater or lesser success. But what we define as pathology or as balls and strikes can never be a matter of science; it is a matter of reasoned consensus.

Human biology and human nature are not equivalent concepts; human nature also includes culture and consciousness and is therefore self-modifying and self-questioning. The laws of biology are universal, but the contents of biology--what kinds of organisms actually exist at any given time--are contingent. Similarly, there are sociological "laws" of diagnosis inasmuch as pretty much all human cultures have implicit or explicit categories of health vs. sickness, but the contents of those categories may justifiably vary across times and places. Diagnostic categories are not entities we discover, they are entities we decide on.

Why do people keep trying to ground nosology in science? Perhaps because ours is a fractious and often fractured culture, such that consensus is very difficult to achieve, and in psychiatry there is no body with the authority of the Supreme Court. Some diagnoses are straightforward--severe and persistent mental illness is no more conceptually ambiguous than, say, murder (which isn't to say there is no ambiguity at all). Views of the proper bounds of ADHD or depression, in contrast, may vary as much as if not more than views of abortion or gay marriage--in all of these cases there is no account that is eternally or "scientifically" valid; there are merely competing claims of harm vs. an ideal of the good.

Monday, August 16, 2010

Monday Moralisms

Just a few tenuously linked cogitations (or cogitated links) today:

1. Paul Kingsnorth laments the watered down sort of environmentalism that focuses on sustainability, which inevitably means sustainability of...humans with their self-absorbed, energy-wasting ways. The apparently true environmentalism that values the natural world for its own sake seems forever in retreat. However, nature in itself is nothing but a human value. Outside of Homo sapiens, nothing in nature would blink if tomorrow the moon slammed into the earth. I always think of Wallace Stevens: "Except for us the total past felt nothing when destroyed."

This is not technically true, as we think pretty strongly that a range of non-human organisms have sentient awareness. But we ourselves are such linguistic beasts through and through that, in large part, if it isn't at least potentially articulated in some way, it isn't fully real. Human beings evolved to care about human values, which include nature, but only in competition with other values.

Some people like their nature mediated. Henri Rousseau drew the inspiration for his jungle scenes not from traveling to the tropics, but from the local zoo. Similarly, I have enjoyed so many nature programs on the Serengeti over the years that I think an actual safari would be a letdown, and not just because of all the other cheesy tourists perched atop jeeps pointing at the nonplussed zebras. Imagination fruitfully expands reality. For better or worse, earth is the human planet until we're gone. The discovery of life elsewhere in the universe, even if it isn't "intelligent" (or perhaps especially if it isn't intelligent), might be a great comfort to some: life existing beyond our capacity to despoil it.

2. The media (the cardinal manifestation of the very human capacity for mass hysteria) doesn't do environmentalism many favors. For weeks we heard that the gulf oil spill was a kind of toxic stake thrust into the heart of oceanic nature. We heard that it may never recover. What do we hear now? That most of the oil is gone and scientists are having to work hard to demonstrate any clear-cut damage to the ecosystem. When people develop alarm fatigue from this sort of thing, is it any wonder that skepticism over global warming persists?

3. Allen Frances, M.D. in the Times decries the possibility of treating normal grief reactions with antidepressants. It is an indication of the bizarre two-sidedness of psychiatry that half of the commentariat complain about psychotropic medications not working, while the other half fret that they may work too well for the wrong indications. Of course, people have been obtaining benzodiazepines and other sedatives for grief symptoms for decades.

This is a great example of the importance of context, and why brain scans and rating scales ultimately play a small role in making diagnoses. For a diagnosis is not primarily about biology or symptoms, it is about the human meaning of what is going on. A psychiatrist who denies a patient an antidepressant for normal grief is acting as a kind of arbiter of interpersonal and cultural well-being.

4. Sharon Begley at Newsweek discusses research suggesting why irrationality may have been favored by evolution. Reason evolved not to arrive at disinterested truth, but to persuade others of a point of view (Nietzsche's truth as a "mobile army of metaphors"). And arguably "disinterested truth" is the most convincing point of view of all. We evolved as promiscuous, murderous sophists--how then has moral progress been possible? Can psychology explain that I wonder? Is reciprocal altruism all there is? If truth is ultimately pragmatic, it ceases to do its work once we see it as merely pragmatic; disinterested truth is the fiction in which we must believe in order to sidestep nihilism. One must willfully overlook the contingency of one's values in the way that a batter at the plate must forget about everything except the pitcher.

5. Dave Pell at NPR considers technology and how it can distract us from "the big picture," which in his case involves atrocities going on in Afghanistan. It made me wonder though, what is this "big picture" that people talk about? Is it more like a painting, or a photograph, or a collage? Is it a composite of all the "little pictures"? For some people the big picture is philosophy, for others science, for others religion. Of course, the picture can get too big, right? From the perspective of the universe or infinity, nothing less than totality matters. I suppose the art of living is the art of perceptual focus, of arriving at the "just right" picture.

Tuesday, August 10, 2010

Where the Wild Things Used to Be

The other day my eight-year-old asked, "Daddy, what does 'integrity' mean?" My heart warmed--it was a Norman Rockwell moment, my chance to impart one of the primary virtues. I tried to explain it in an age-appropriate way, and asked what had prompted the question. "Oh, it's also a brand of alarm system." (After having moved on from a great enthusiasm for natural disasters, his current preoccupation is with smoke and fire alarms and other indicators of incendiary mayhem and transgression). More Jackson Pollock than Norman Rockwell. Hopefully he'll remember that it's not just a brand name.

This came to mind when I read Anne Applebaum's Slate post on ADHD in literature, specifically embodied by Tom Sawyer and Huckleberry Finn. It has been many years since I first read of their adventures in oppositional defiance, but she seems about right that while Mark Twain doubtless romanticized their naughtiness, they lived in a time far more tolerant of disobedience, distraction, and disregard for academic achievement. Or maybe "tolerant" is the wrong word: it was an age that left more space, both physically and psychologically, for such things.

In Madness and Civilization and other works, Michel Foucault argued that around the middle of the last millennium, when the first glimmers of the Enlightenment appeared, Western civilization began to become distinctly less hospitable toward mental disorder. The mad, who for centuries had wandered more or less unmolested along the margins of society, came to be seen as a greater threat to new priorities for the order and management of populations. Previously seen as harmless or perhaps even as alternative sources of vision, the mad were increasingly perceived as a menace.

I wonder if the escalating pathologizing of ADHD features could represent a second great phase of this "civilizing" process, if unfocused energies and scattered cognition present challenges to a logocentric society that are more subtle than those of mania or psychosis, but ultimately intolerable nonetheless. As Hanna Rosin straightforwardly argued in The Atlantic, the culture and the economy increasingly valorize and reward calm, structured, meticulous, and persistent verbal order, all of which may be more commonly found in women, on average, than in men.

It is not only that expectations for order are higher and that there are far more moving pieces, so to speak, in a post-industrial information society. Technology also amplifies any specific potential source of disorder, via means such as automobiles, firearms, the Internet, or in the case of rogue terrorists, nuclear or biological weapons. It is like not only building a bridge far longer than has ever been attempted before, but also in unprecedented water and weather conditions. The cognitive inefficiencies of ADHD, which often of course entail great creativity, may come to be a cultural luxury for which we have to fight to maintain space (the playgrounds and natural parks of human cognition perhaps).

Sunday, August 8, 2010

Knowing and Being Known


"There is nothing so practical as a good theory."

Kurt Lewin (?)


I can't resist commenting on Daphne Merkin's New York Times article on her (mis)adventures in therapy, not specifically because of its implications for her or for psychoanalysis, but because of general issues it brings to mind regarding diagnosis and levels of understanding.

On a human level the piece intrigues mainly with its idiosyncratic portraits of Merkin's successive therapists; taking full advantage of the writer's prerogative, she turns the tables by pigeonholing them (this one is dowdy, this one seedy, this one aloof, etc.) just as they would aspire to pigeonhole her. However, the surprising yet perhaps telling thing is that she doesn't actually document them pigeonholing her, that is, there is almost no discussion of diagnosis beyond the vaguest of terms: anxiety, depression, neurosis.

It is not my role here to speculate on Daphne Merkin, who is a brilliant writer. But any decent clinician is going to have one diagnosis come to mind when she discusses having poor boundaries, chaotic relationships, one therapist who comments on her difficulties with navigating emotional proximity, and at least one episode of severe regression when given free rein to explore her childhood issues. And yet there is no discussion of diagnosis or its extension, prognosis; that is, what pattern exists here and how might it unfold over time?

Merkin seems surprised herself at how little overall clinical effect her perpetual "life in therapy" has had, not least because while some of her therapists were clueless and unhelpful, others were deeply empathic and understanding. Indeed, she felt very understood by and very attached to at least a couple of them, and yet nothing seemed to change for her overall, at least in the way she was hoping for. What is going on here?

This question also came to mind when I read Cheryl Fuller's most recent post on obesity which, while making no explicit reference, followed and seemed an implicit response to my most recent post here. Her eloquent post, decrying allegedly simplistic overgeneralizations, was a plea for deep understanding of the individual experience of obesity. It brought to mind what I see as a tension between empathy and theory (or diagnosis) as ways of knowing that have different but complementary purposes.

Empathy is, of course, a fine-grained attunement to an individual's emotional state and history, an engagement with a truly unique sensibility and life trajectory. This is obviously a form of knowledge, and may be likened to other forms of perception that are sui generis: a singular Picasso, or a sunset whose precise configuration of color and shadow will never be precisely repeated anywhere or at any time. Human beings have a powerful need to understand and to be understood in this way, which is the way of love, friendship, ethics, and aesthetics. It is what most people think of as the ends (i.e. the goals) of human life.

However, the whole point of theory and diagnosis is to overlook endless idiosyncratic differences and to identify how entities and processes may be alike, not how they differ. This kind of classification has three purposes: it enables us potentially to know how phenomena may develop over time, how we may go about modifying them, and on a more abstract level, how "it all fits together." The former two aims are those of science, while the latter aim belongs to spirituality. These things are necessary because a universe of irreducible uniqueness is also a universe of chaos (and a universe without language, which itself intrinsically glides over differences; a language could do full justice to individuality only if it had as many words as there are entities).

A good theory (or diagnosis in medicine) should lead either to effective interventions or to a recognition of the necessary limits of treatment, i.e. prognosis. A theory that aids in neither altering nor predicting outcomes is useless. But there are different levels of explanation with corresponding different levels of treatment. For instance, we lack an ultimate theory of obesity, that is, an explanation of who gets fat and why, but we have a more proximate theory: obese people become obese because more energy is absorbed from food than their bodies expend. Therefore one can intervene at that level of explanation through bariatric surgery (the ethical and cultural considerations of which I'm not touching here).
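
To make that proximate level concrete, here is a minimal sketch of the energy-balance arithmetic in Python. The 3,500-kcal-per-pound figure is the conventional rough rule of thumb rather than a precise physiological constant, and the intake and expenditure numbers are made up:

```python
# Minimal sketch of the proximate energy-balance theory of obesity.
# KCAL_PER_POUND_OF_FAT is the conventional rough rule of thumb,
# not a precise physiological constant.

KCAL_PER_POUND_OF_FAT = 3500

def yearly_weight_change_lbs(daily_intake_kcal, daily_expenditure_kcal):
    """Pounds gained (positive) or lost (negative) over a year,
    assuming the daily surplus or deficit stays constant."""
    daily_surplus = daily_intake_kcal - daily_expenditure_kcal
    return daily_surplus * 365 / KCAL_PER_POUND_OF_FAT

# A barely perceptible surplus of 100 kcal a day compounds to roughly
# ten pounds a year; an intervention like bariatric surgery acts on
# the intake term of this equation.
print(yearly_weight_change_lbs(2500, 2400))  # ~10.4
```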

To come back around to Daphne Merkin's quandary: as presented in her article at least, the diagnostic/theoretical system has failed her on multiple levels. For while psychoanalysis has never been accused of neglecting theory, and the practices she describes imply a theory, this is never made explicit to her. While there may be passing and perceptive interpretations, no therapist ever comes out and says what may be wrong with her.

There is also a problem with prognosis. If I see someone who has seen a dozen other doctors over decades without success, then there are three possibilities (listed in ascending probability): the diagnosis is wrong, the diagnosis is right but not every possible treatment has been tried, or the condition is untreatable. Merkin's therapists never seem to consider that the diagnosis may be one that is not amenable to classical analysis. Or even if they consider this option but reject it, they are undeterred by the failure of a dozen of their colleagues in the past. Why? Perhaps because in psychoanalysis idiosyncrasy plays such a primary role that there are as many different treatments as there are individual therapists. A final possibility is that her therapists know that the treatment will not work in any conventional sense but view it as a kind of palliative care. This seems to be Merkin's own take on it; by the end she is not hopeful of any real progress, but life in therapy seems at least slightly less intolerable than life without.

So medicine and psychiatry need both levels of understanding: the empathic and the theoretical. If the latter unopposed is crude and callous, the former unopposed is static and ineffectual. Theory enables us to manipulate the world (including our own bodies) to our own ends, while empathy enables us to decide what those ends will be. When it comes to psychiatry, as I have written here in the past, flaws in treatment are far less grievous to the profession than flaws in diagnostic understanding. As perpetual debates over psychoanalysis and the DSM-5 demonstrate, our map of the human psyche still has wide swaths of empty space, offering limited guidance to those lost on the way. Our minds so often rush to treatment options (how do we get there from here?) that we often skip a crucial orientation step (where are we exactly?). Psychiatry will be waiting for its GPS for a long time. When we finally get it, let's not become overly dependent on it...

Wednesday, August 4, 2010

If Thine Eye Offend Thee...


"Has it ever struck you that there's a thin man inside every fat man, just as they say there's a statue inside every block of stone?"

George Orwell


As the national obesity rate continues to creep higher, Cheryl Fuller at Jung at Heart speculates about the deeper meanings, if any, of obesity. As she notes, there is a natural human craving for a smoking gun, a prime mover: a gene, an archetypal childhood experience, a cultural imprint, anything. It is a craving not to be satisfied, as it increasingly appears that weight is a complex result of human identity, no more easily isolated and explained than, say, intelligence. It is the outcome of natural (and individually variable) gratification, energy expenditure, and environmentally available calories.

Fuller notes that the inferred role of volition is central to obesity, and this, combined with its unavoidably public aspect, makes it a virtually unique target of social judgment in our society. Most other objects of discrimination are either agreed to be unchosen (race, gender) or can be more or less concealed (sexual orientation, substance abuse). As a slender person who was fat through adolescence, I have always felt like a bit of an oddity in the great obesity debate. So I thought I would share how my attitudes toward food and lifestyle have developed over time. Crucially, I claim no personal merit or superiority for the experience--it could as easily be said that the weight was lost for me (by developmental genetic change, etc.) as that I lost it and kept it off. And indeed, as I think back on it now, I think that it was not so much an exertion of willpower as it was taking steps to minimize the need for willpower. Or I may have just "grown out of it." But aspects of the process suggest to me what might have to happen for a person to beat obesity.

My family history of obesity is, I suppose, moderate; some have had it (not morbidly), some haven't. I was basically born fat and remained that way to varying degrees until around age 17. I was not morbidly obese--certainly there were bigger kids--but it was significant enough to affect juvenile social relations, self-esteem, athletics, etc. Especially in my early teen years I went through many diets that were miserable and only transiently if at all successful. But I loved food dearly, a wide variety of foods, and it was painful to deprive myself of things I loved. I heartily disliked exercise, not least because I was out of shape.

When the change came, it came in stages and not as the result of some conscious plan. The first step was my first job at age 15, a newspaper route that, due to the inconvenience of stopping the car every 50 feet, I did on foot and, increasingly, at a run. Suddenly I had daily exercise that I did not have to force myself to do; willpower was removed from the picture. For while I loved food, I also loved having spending money, so I effectively had no choice but to exercise.

I had the paper route for only a couple of years, but I tried to adopt attitudes toward physical activity that would seem automatic, not requiring constant deliberation (the dreaded willpower). I take elevators only to ascend or descend five floors or more. Unless frankly fatigued, I try not to sit when I can stand, and not to be still when I can pace and fidget. I have never much cared for running, swimming, or biking, but found in walking a daily activity that suited my meditative disposition and that eventually came to seem indispensable. I am restless and uncomfortable when unable to take at least a short walk in a day.

But in terms of my psychological stance toward food, I think the more far-reaching change was that, somehow, food ceased to be for me the source of gratification that it once was. This is captured by the cliche "Eat to live, don't live to eat." In a process that seemed to be unconscious at the time, I "decided" that food would no longer be a major source of pleasure to me. I decided that certain foods (pancakes, doughnuts, elaborate desserts) would be largely off-limits to me--they became gratuitous, no longer worth the risk. It helped when, a few years later, I became vegetarian for ethical reasons, a change that in itself dropped the last ten to twenty pounds that I needed.

This has not been culinary asceticism per se. I still love chocolate, ice cream on a summer day, fresh bread, etc. But I no longer relish these things in the sense of arranging my life around them, that is, they are incidental. I will pick them up when convenient, but I don't go out of my way for them and do not relish them in the way that I might relish a piece of music or a book. Food beyond that needed for sustenance is just not a significant part of my life.

As I said, I do not claim any particular merit for this or any implications for any other persons; I merely describe how it seems with me. It is not something that I take smug pleasure in. The point is that it does not require prodigious or prideful effort; it flows naturally. It is well known that gene expression is not fixed once and for all; it waxes and wanes throughout the life cycle unpredictably. Perhaps as an adult I merely enjoy some genetic dispassion for high-calorie foods, whereas as a child I suffered the opposite. Certainly I have other vices--on any given day it would be easier for me to forgo food as opposed to caffeine or Internet access.

Alternatively, as Cheryl Fuller might speculate based on her blog post, perhaps it became psychologically intolerable for me to remain fat, such that the joy of eating, long gone from my life, was not too high a price to pay. I think that many people, and certainly not only the obese, delight in food as one of the basic pleasures of animal life and are not willing to give that up. So when a person desires thinness but does not achieve it, it reflects not weakness, but an unwillingness to pay the often steep price demanded. There is a major trade-off involved. And people should not be blamed for their choices (unless they expect others to pay the costs of those choices). The complication is that what appears as a choice may not always be so (does a nicotine addict "choose" to keep smoking? yes...and no). The unsuccessful dieter thinks that he can deprive and exert himself for a few months before reverting to the status quo, when the reality is that he must alter his basic worldview: he must fall out of love with food.

It is often pointed out that eating cannot be considered a true addiction because it is not possible to abstain from eating. That is true, but it is possible to abstain from a certain degree of gratification in eating. To my mind, this is the kind of lifestyle overhaul required to beat obesity, analogous to the alcoholic avoiding bars or hard-drinking friends. While this is a matter of choice, it is not a simplistic matter of weakness vs. strength or willpower. Willpower is the alcoholic sitting in a bar all evening long and not taking a sip--no one expects that to work. One cannot avoid food, but one can avoid food as pleasure. Just as an alcoholic must look at a bottle and see poison, someone wishing to lose weight must look at food and see not a sumptuous feast, but rather a necessary evil. And obviously it is crucial to construct compensatory gratifications, whether sensuous or intellectual.

Tuesday, August 3, 2010

The Mumbo Jumbo Men


"Well, if it's a symbol, to hell with it."

Flannery O'Connor (of the Eucharist)


According to a local story, notoriously nefarious atheist Daniel Dennett's latest kick is, apparently, publishing case reports of religious hypocrisy (once again, the consistency of human behavior is shockingly cast into doubt). A United Church of Christ campus minister at Duke does not believe in "the cosmic guy in the sky," instead holding, according to the reporter, that:

God is a process of mysterious cosmic creativity that makes for greater love and justice. He thinks of God as a force working within human beings and nature, and he sees his role as trying to imitate that divine character whose greatest exemplar is Jesus.

For better or worse, I'm a pretty abstract person, but I've always been at a loss to understand what this sort of thing means. Is it Spinoza's pantheism, a congruence of nature and God? Or is it more like The Force? (There is deep wisdom in popular culture if one only knows where to look for it). Is the "cosmic creativity" also behind, say, evil (no creation without destruction)? Or is that just what makes it so mysterious?

To paraphrase something Stanley Fish once wrote, I've never been sure that religion is something that one can subtract God from and still have something left. That's why I'm not convinced that Buddhism is a religion--a profound and culturally powerful tradition of thought and practice, yes, but not a religion.

Once the possibility of the supernatural drops out, call it a fellowship, or a philosophy seminar, or a therapy group, but not a church. In fact, the minister described in the story is essentially doing pastoral therapy, that is, trying to respectfully manage belief systems in a way that is beneficial by secular standards.

And yet...there is an undeniable sense of the sacred and sublime, experiences that must be psychological, yet cannot be viewed as merely psychological. That is, they denote not the supernatural, but their essence vanishes when fussily isolated and manipulated by instrumental reason ("Let go, Luke!"). Art, love, the natural sublime, ethical practice, and (the absence of) God fall in this category of willed acceptance (the acceptance of the need for acceptance as it were), the willing suspension not of disbelief per se, but of reductionism and radical skepticism. The sacred is found where Reason averts its gaze, not for the sake of gratification or chaos, but of a greater good. It is the paradox of the mind trying to stay out of its own way, of glories glimpsed only out of the corner of the eye.

Monday, August 2, 2010

Whatever You Say, Doctor


"The brain that is innately fearful and angry has been selected for by evolution."



"My holy of holies is the human body, health, intelligence, talent, inspiration, love and absolute freedom--freedom from violence and falsehood, no matter how the last two manifest themselves."

Chekhov


Two years ago tomorrow appeared the first post of this peculiar A. P. blog; 361 posts later, it still seems as good a rationale as any. Happily, I have little more to say about the activity of blogging itself--it is an intellectual hobby, full stop.

I am forever casting about for serviceable metaphors for what it is that psychiatrists do. Well, psychiatrists are doctors--what do they do? Arguably they have a dual role, one scientific/technological and the other dramatic/emotional. Taking the latter first, the clinical encounter is a carefully scripted and staged act of caring which grounds the endeavor in the aim of healing rather than, say, exploitation. And yet it cannot be only drama; medicine functions legitimately only if there is real technical know-how beyond the layman's scope that doctors are privy to. The two act in concert to produce a clinical outcome.

It is the technical basis that Daniel Carlat has, to some scandal, questioned in his recent book Unhinged. If the science of medicine comprises diagnosis and treatment, both come under intense scrutiny in that volume. As the "debate" over DSM-V has grown into the kind of ad hominem free-for-all that would seem typical of the U. S. Congress, psychiatric diagnosis is considered by many to be as much hearsay as real science. And as alleged technical breakthroughs like vagus nerve stimulation and transcranial magnetic stimulation have turned out to be relative duds, some of psychiatry's most powerful somatic treatments remain at the level of mid-20th century technology. Carlat concludes that psychiatrists should no longer have to be physicians by training--the implication is that our technology, such as it is, does not justify it.

Undeterred, Henry Nasrallah, M.D. extols the "futurology of psychiatry," laying out the technical revolution that is just around the corner--we have been hearing that for 25 years, but this time it's apparently for real. I think of a sonorous voice from my childhood: "Gentlemen, we can rebuild him; we have the technology." But while other physicians are replacing knees and stenting coronary arteries, psychiatrists are...prescribing Valium. But Lee Majors looking mellow for an hour wouldn't have made for much of a show--even in the 1970's.

Granting the profession's shortcomings yet decrying the infighting and self-doubt, Ronald Pies advocates the "prescriptive bond" as the essence of the medical mission of psychiatry (one that, presumably, could not be performed by psychologists with prescribing privileges). Basically, he is writing about the drama of the white coat, that is, a seemingly trivial prescription has the history and authority of several millennia of medical tradition behind it. The caduceus. The Aura of the Doctor. And yet what good is that if the prescription is for sugar pills, or for something that may do more harm than good?

I think of the psychiatrist as a weird hybrid of the philosopher, the priest, the legislator, and the pharmacist. All of these must grapple with inherently ambiguous and contentious issues corresponding to crucial human needs. All must deal with the man on the street, whose opinions on these matters often exceed all in intensity if not in wisdom. But what ensues when the community loses faith in the shaman is...debates like these. Everyone feels he knows best about his body and what to put into it, about right and wrong, about God, about what policies to enact. And yet society sees fit to appoint "experts" for these roles not really befitting the title of "expert." Psychiatrists should not be glorified druggists, but those with enough perspective and wisdom to usefully frame human suffering. In that sense, pace the futurologists, there will be "nothing new under the sun."