Dr. Grossi's Blog
On the Op-Ed page of the August 15, 2010 New York Times, Allen Frances wrote, under the title "Good Grief," that the proposed DSM-5 would medicalize normal grief, leading to overdiagnosis and overtreatment as normal grief was relabeled Major Depressive Disorder. I know from other publications, as well as interviews, that Dr. Frances, former chair of the Department of Psychiatry at Duke and chair of the committee responsible for producing DSM-IV, believes that manual overreached, leading to an epidemic in the diagnosis, and thus the treatment, of ADHD, childhood bipolar disorder, and autism. He has alluded to the bonanza the pharmaceutical industry reaped from these decisions. Without doubt, ring-fencing a particular set of symptoms and labeling them with a diagnosis can be problematic when the phenomenology is not bound to an underlying genotype and chemistry. That link is not currently available for psychiatric illnesses, though progress is at hand, as discussed in my blog post The Future of Psychiatry.
Life is a series of attachments and detachments in the form of losses (bereavement, divorce, bankruptcy, rape, loss of health or of limbs, etc.), and a mourning process is natural and healthy. DSM-IV carves out bereavement as a normal process as long as it lasts less than eight weeks. The process involves a complex latticework of psychological and physiological responses to the memory of the lost one. These responses span a period of time and are taken up seriatim, in a manner similar to examining each piece of a jigsaw puzzle and gradually assembling it until it is complete and can be put away on a shelf. I have seen this take place in as little as four weeks and as long as sixteen weeks. When the period exceeds ten weeks, it is often difficult to distinguish from a Major Depressive Disorder. The different forms of bereavement, and their differentiation from Major Depressive Disorder, go beyond the scope of this brief article.
It is clear to me that Dr. Frances has strong feelings about psychiatry changing diagnostic standards and overdiagnosing, thereby shrinking the territory of normality and increasing the amount of treatment rendered. He notes that labels can cause problems in securing a job, insurance, or a security clearance, and that medication can produce improvement either through a placebo effect or through a neurophysiological effect. He calls such pills "useless pills."
I view this last statement as ill-advised, because the current Zeitgeist is one of looking for conspiratorial activity by the pharmaceutical industry, whose image has been damaged by revelations of financial conflicts of interest affecting the integrity of the medical profession and academic medical centers, as well as by publication bias, in which failed trials go unpublished. Another point is that pills that produce a placebo effect are in fact producing a result. This should be considered in light of the fact that most depressed people get their care from a primary care physician. If there are sixteen million depressed people at any one time in the United States, then a six percent placebo response is almost one million people, and twelve percent is almost two million - not an insignificant number. We should also keep in mind that the FDA requires only two positive trials and ignores the failed ones, and that journals are reluctant to publish failed trials.
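The placebo arithmetic above can be checked in a couple of lines. A minimal sketch, using the illustrative figures from the text (sixteen million depressed Americans, six and twelve percent response rates) rather than any epidemiological estimate:

```python
# Back-of-the-envelope check of the placebo-response numbers cited above.
# The 16 million prevalence figure and the 6%/12% response rates are the
# post's own illustrative numbers, not epidemiological claims.
depressed_us = 16_000_000

for rate in (0.06, 0.12):
    responders = round(depressed_us * rate)
    print(f"{rate:.0%} placebo response -> {responders:,} people")
# 6% placebo response -> 960,000 people
# 12% placebo response -> 1,920,000 people
```

So even the lower response rate reaches nearly a million people, which is the point of calling a placebo result a result.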
When an individual migrates into the land of Major Depressive Disorder, he or she should be treated with medication as one modality. Let us not forget that depression, under-treated in both the short term and the long term, produces a staggering amount of disability and is a brain killer. Depression has been shown to be associated with decreases in brain-derived neurotrophic factor (BDNF) and increases in glucocorticoids, inflammatory cytokines, and oxidative stress. Every new depressive episode increases the risk of additional episodes, leading to chronicity. Long-term prophylaxis is recommended after two episodes in which symptoms have each been present more days than not for at least two weeks.
Finally, a few comments about antidepressants. The January 29 cover article in Newsweek, "The Depressing News About Antidepressants," discussed the idea that antidepressants don't work, or work no better than placebo. That conclusion was based on a January 6 article in the Journal of the American Medical Association, which found that the benefit of antidepressants over placebo grew as the severity of the depression increased, and that they were minimally effective for patients with mild or moderate symptoms. The popular press misunderstood the meaning of this article because it misunderstood the meaning of the placebo effect. In clinical practice you can prescribe a drug, do psychotherapy, or do nothing and wait. Doing nothing and waiting is not the equivalent of a series of sessions in a placebo-controlled trial. Several researchers have also noted that the placebo response has increased in recent years because of study design and because study subjects do not fall on a random distribution curve.
Lost in all this controversy were some very hard findings about antidepressants. They have been shown to increase neurogenesis and neuroprotective factors such as BDNF, to prevent stress from decreasing BDNF in the hippocampus (a region critical to short-term memory), to prevent decreases in hippocampal volume, and to have positive effects on longevity and medical health.
It seems to me that researching the genetics, neurobiology, and neurophysiology would be much more productive than arguing about where certain diagnostic lines should be drawn. The research will draw the line. Hmm, I wonder what the Chinese are doing right now?
In the June issue of Behavioral and Brain Sciences, Henrich, Heine, and Norenzayan published a disquieting article that undermines the confidence of readers of psychological research. Psychologists very often do their research on willing undergraduates at universities. The drawback to this approach, which they elucidate in the article, is simply that this pool of subjects is overwhelmingly drawn from Western, educated, industrialized, rich, and democratic (WEIRD) cultures. The authors argue that this group is not representative and that claims about human behavior based on it are likely to be incorrect. They show that WEIRDos are in fact unusual when compared with adults and children in other societies.
It has always been assumed that there are some minor cultural deviations but that groups are fundamentally comparable. This paper challenges that core assumption by showing statistical differences in visual perception, reasoning styles, and conceptions of the self. The authors present the Muller-Lyer illusion and then discuss the statistical differences in the perceived length of the lines among children and adults from sixteen different cultural areas. In Western industrialized societies one line generally looks shorter than the other; in smaller societies, however, the illusion is less powerful.
In 1967 Jones and Harris co-authored a paper that led to the idea later called the fundamental attribution error. Subjects were asked to rate people who spoke in favor of Castro and those who spoke against him. Naturally, they rated those who spoke in favor of Castro as having positive feelings toward him and those who spoke against him as having negative feelings toward him. When told that the speakers' positions had been determined by a coin toss, the subjects did not change their ratings. In other words, they could not shake the belief that the speakers were expressing their internal feelings. This was later called the fundamental attribution error, defined as granting too much weight to internal or personality factors and too little weight to external or situational factors in explaining the behavior of others. Henrich and colleagues point out that this bias is much less pronounced outside of WEIRD societies.
While there are some domains in which small-scale societies are similar to large industrialized societies, there are others in which they differ, e.g., the importance of choice, independent versus interdependent self-views, analytic versus holistic reasoning, and the self-enhancing biases of Westerners. Non-WEIRD societies are much less analytic, exercise more holistic reasoning, do not see themselves as exceptional, and place less emphasis on choice. Indeed, the authors point out that Americans are outliers when compared with other industrialized countries, and American undergraduates are outliers even within their own group.
Will these data remain valid, or will smaller non-industrialized societies change over time to reflect WEIRDo attitudes?
Premack and Woodruff's 1978 article "Does the chimpanzee have a theory of mind?" raised this question when they demonstrated that chimpanzees, and possibly other primates, could read intentions. Subsequent findings showed that primates are quite sophisticated: they can form alliances, deceive, and bear grudges (sound a little like Othello?). They can even tell what other chimpanzees can and cannot see. Still, proof that they have a theory of mind is incomplete. Theory of mind is the ability to attribute to oneself and others intents, desires, feelings, knowledge, deceptions, and beliefs that are separate from, and may diverge from, one's own.
Much research has centered on false-belief testing, often called the 'Sally-Anne' task. Sally has a candy that she puts in a basket, and then she leaves the room. While she is out of the room, Anne takes the candy out of the basket and places it into a box. The child observing all this is then asked where Sally will look for the candy when she returns, and passes by answering "the basket." Until the age of 5 or so, children fail the test and say the box. To get this problem right, the child has to perform a mental feat: understand Sally's beliefs and intentions, whether accurate or not, and use them to predict her action. Most children with autism cannot pass this test. We are the only species that can infer what someone else is thinking. How do you get a thought from one person's brain into another person's brain?
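The logic of the task can be made explicit with a toy model (my own illustrative sketch, not anything from the experimental literature): the trick is that the true state of the world and Sally's belief about it must be tracked separately, and passing means answering from the belief.

```python
# Toy model of the Sally-Anne false-belief task. The world state and
# Sally's belief are stored separately; passing the test means answering
# from Sally's now-false belief rather than from the actual world.
world = {"candy": "basket"}    # Sally puts the candy in the basket
sally_belief = dict(world)     # Sally saw this happen, then leaves the room

world["candy"] = "box"         # Anne moves the candy while Sally is absent

def where_will_sally_look(has_theory_of_mind: bool) -> str:
    """A child with theory of mind answers from Sally's belief;
    a younger child answers from the actual state of the world."""
    return sally_belief["candy"] if has_theory_of_mind else world["candy"]

print(where_will_sally_look(True))   # basket -> passes the test
print(where_will_sally_look(False))  # box    -> fails the test
```

The younger child's error, in this framing, is collapsing the two dictionaries into one: there is only the world, and everyone is assumed to know it.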
Since children from all parts of the world acquire this ability at about the same age and with comparable developmental landmarks, it appears to be a separate adaptation rather than part of general intelligence. Some psychologists theorize that this ability was important in the evolution of language, because learning words is much easier when you know what your parents are referring to or trying to teach.
Other experiments have been done. Helen Gallagher designed an experiment based on the game "rock, scissors, paper": rock beats scissors, scissors beat paper, and paper beats rock. Subjects were put in a scanner and told either that they were playing against another person or against a computer. In follow-up interviews, the subjects who thought they were playing against a human disclosed that they had tried to figure out their opponent's strategy. The scans showed activation of a small area above the eyes known as the paracingulate region, suggesting that this region is involved in separating one mind from another, our beliefs from the beliefs and intentions of the other. More recent research on the neural basis of theory of mind has diversified, focusing on beliefs, intentions, psychological traits, animations, attentional reorienting, and false understanding, and has implicated a variety of other brain areas. Discussion of these research efforts goes beyond my effort here.
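For readers unfamiliar with the game, its payoff rule is a three-line cycle. A minimal sketch (the function name and return convention are my own, not from Gallagher's study):

```python
# The cyclic payoff rule of "rock, scissors, paper":
# each move beats exactly one other move.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def round_winner(a: str, b: str) -> str:
    """Return 'a', 'b', or 'draw' for a single round."""
    if a == b:
        return "draw"
    return "a" if BEATS[a] == b else "b"

print(round_winner("rock", "scissors"))  # a
print(round_winner("rock", "paper"))     # b
print(round_winner("paper", "paper"))    # draw
```

Because the cycle has no dominant move, the only way to beat a human opponent systematically is to model the opponent's strategy, which is exactly the mental activity the scanner was meant to catch.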
From an evolutionary standpoint, the involved neural networks were probably favored by natural selection, because understanding and predicting a predator's behavior would have had definite survival value and thus increased the fitness of those making the most accurate predictions. Understanding mental states is the best way to predict what the other will do next. The evolutionary value of self-reflection is less clear, unless comparing the self to the other enhances accuracy.
Check out Rebecca Saxe ...