The term fear is firmly entrenched in both our technical vocabulary and the vernacular. It would serve little purpose to offer a precise definition for the present discussion, but some consideration of the ways in which the term has been defined may be useful. For research purposes, conditioned fear typically is defined operationally in terms of the environmental events that set the conditions for fear, although some behavioral or physiological index is frequently used to corroborate the effect. Irrespective of the precision of this operational definition, the ultimate goal of the research (and, indeed, the choice of parameters that make up the operational definition) is to provide an experimental model that parallels the human conditions listed above. This remains a distant goal, but major developments within the areas of experimental psychology and psychopharmacology allow a reasonably coherent model to be presented.
The terms conditioned fear or learned fear underscore the importance of experience in determining the sources of fear. Some theorists claim that all fears are learned during the lifetime of the individual. Even those who argue for innate fears present a very short list of exemplars (e.g., fear of snakes, fear of unsupported heights). In virtually all cases, the fear is based upon some consistent relationship between the environment and some painful or otherwise noxious stimulus. It is the understanding of these environmental relationships that provides us with the experimental procedures to study fear and anxiety.
The earliest experiments, and those which have become most important in our understanding of fear, were done by Pavlov (1927). His reasons for conducting these experiments were not to learn about fear and anxiety, but rather to develop the laws for learning about environmental relationships. An important distinction was the one made between the unconditioned response (UR) and the conditioned or conditional response (CR) of the organism. The UR is the direct response that is elicited by the noxious stimulation. Examples put forth by Pavlov include the defensive salivation in response to the sour taste of acid, leg flexion in response to foot shock, and other motor responses to intense physical stimuli such as a pin prick. Pavlov recognized the importance of the psychic (i.e., emotional) component of this direct response to the strong stimulation and, more importantly, the ability of this emotional component to move forward in time and anticipate the occurrence of the painful stimulation. Pavlov studied this phenomenon in considerable detail, but the three paradigms shown in Figure 4-1 demonstrate the most important principles that he developed:
Delay conditioning
The delay conditioning procedure can also utilize long delays with somewhat different results. In this procedure, the CS is gradually presented for longer and longer periods of time until there is eventually a long delay between the onset of the CS and the occurrence of the painful US. Pavlov found that the animals not only could bridge this gap in time, but ultimately were able to appropriately delay the occurrence of the CR until it just preceded the arrival of the US.
Trace conditioning
The trace conditioning procedure involves a brief presentation of the CS, a period of time during which no stimulus is presented, and then the presentation of the painful stimulus. Under these conditions, the anticipatory responding is slower to develop and more fragile, but the success of the procedure provided the necessary demonstration that the conditioned response could be based upon the memory (trace) of a previous stimulus.
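The temporal logic that separates the two paradigms can be made explicit. The following Python sketch is purely illustrative; the interval values are arbitrary assumptions, not Pavlov's actual parameters.

```python
# A purely illustrative sketch (not from the text) of the temporal structure
# of the two paradigms. All interval values are arbitrary assumptions.

def delay_trial(cs_onset=0.0, us_onset=10.0, us_duration=0.5):
    """Delay conditioning: the CS remains on until the US arrives, so the
    CS bridges the entire CS-US interval."""
    return {"CS": (cs_onset, us_onset + us_duration),
            "US": (us_onset, us_onset + us_duration)}

def trace_trial(cs_onset=0.0, cs_duration=2.0, trace_interval=8.0,
                us_duration=0.5):
    """Trace conditioning: a brief CS, a stimulus-free gap (the trace
    interval), then the US; the CR must be based on a memory of the CS."""
    us_onset = cs_onset + cs_duration + trace_interval
    return {"CS": (cs_onset, cs_onset + cs_duration),
            "US": (us_onset, us_onset + us_duration)}

print(delay_trial())  # CS on for the whole 10-s CS-US interval
print(trace_trial())  # CS off 8 s before the US arrives
```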
In summary, Pavlov developed the experimental procedures to study the important fact that emotional behaviors such as fear do not require the actual presence of an aversive event, but can be triggered (in a lawful fashion) by events that have reliably predicted the occurrence of aversive events. As we shall see later, it is this separation in time of emotional behavior from the actual events that elicit the original response that forms the basis for the development (and treatment) of stress disorders.
Instrumental Conditioning
Pavlov's experimental procedures involved the physical restraint of the subjects, thereby limiting the types of anticipatory responses that could be made. The experimental procedures that evolved in the United States involve much less restraint and allow more global response patterns to be emitted. One of the more common parallels to Pavlov's unconditioned response places the subject (usually a rat rather than a dog) in a long, narrow alleyway that has an electrified grid floor (see Fig. 4-2). The subject can escape from this painful stimulation by running to the opposite end of the alley and stepping into the non-electrified goal area of the box. Note that the subject's behavior is instrumental in escaping from the aversive stimulation. Learning is evidenced by progressively faster running speeds.
Avoidance learning
This simple escape procedure is typically modified to include an initial warning signal (i.e., a CS) that allows a brief period of time to reach the goal area before the shock (US) arrives. Thus, the rat can either avoid the shock by traversing the alley during the presentation of the CS or, failing that, can escape the shock that follows several seconds later. For the philosophically myopic, the ability of the rat to learn to avoid the impending shock so readily was a problem: Was the rat performing some behavior that was based upon some future set of events? Well, of course it was, but the acceptance of this notion was aided greatly by the proposal of the so-called two factor theory of avoidance behavior.
Two factor theory
Some of the early experimentalists saw the distinction between classical and instrumental conditioning as being too arbitrary, and suggested that instrumental conditioning may include a component of classical conditioning (e.g., Mowrer, 1939). The two factor theory suggests that avoidance behavior is based upon a combination of classical (Pavlovian) conditioning and instrumental conditioning. The subject first learns the environmental relationships that exist according to the laws of Pavlovian conditioning; for example, the onset of a light (CS) is reliably followed ten seconds later by the onset of shock (US), allowing the development of anticipatory fear. (Other stimuli such as handling, the characteristics of the testing chamber, etc., can also serve as CSs.) Once this anticipatory fear has been established, the organism can learn the environmental contingencies that are based on the fact that certain responses are instrumental in terminating either the fear-producing CS or the actual pain-producing US. Thus, the notion of conditioned fear becomes an important determinant in the selection of behavior, and the so-called avoidance responses are actually responses that escape this conditioned fear.
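As a rough illustration of how the two factors might combine, consider the following toy simulation. It is invented for this example, not a model from the literature; the increments and initial values are arbitrary assumptions.

```python
# A toy simulation (invented for illustration) of two factor theory.
# Increments and initial values are arbitrary assumptions.

import random

fear_strength = 0.0      # factor 1: Pavlovian fear conditioned to the CS
response_tendency = 0.1  # factor 2: instrumental tendency to cross the alley

for trial in range(50):
    responded = random.random() < response_tendency
    if responded:
        # The response terminates the CS; the resulting fear reduction
        # reinforces the response (factor 2).
        response_tendency = min(1.0, response_tendency + 0.1 * fear_strength)
    else:
        # The US is delivered; the CS-US pairing strengthens conditioned
        # fear (factor 1).
        fear_strength = min(1.0, fear_strength + 0.2)

print(f"fear = {fear_strength:.2f}, avoidance tendency = {response_tendency:.2f}")
```

The point of the sketch is the order of events: fear must be conditioned first (early failed trials), and only then can fear reduction reinforce the avoidance response.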
Generalized fears
In the simple experimental procedures described above, the conditioned fear plays a straightforward, even positive, role in guiding the behavior of the organism. There are, however, a variety of situations in which the same fear response interferes with ongoing behavior and, as we will see shortly, contributes to the harmful physiological effects of stress. One of the early demonstrations of a learned fear response that is basically nonproductive was Watson and Rayner's (1920) somewhat infamous experiment with Little Albert. A few presentations of a white stuffed toy (CS) followed by a loud noise (US) resulted in a learned fear response that could be elicited by the presentation of the CS alone. In fact, this learned emotional response was elicited not only by the original stuffed toy, but by other similar white furry objects, a phenomenon called stimulus generalization. Although the procedures and the underlying processes of learning are essentially identical to Pavlov's simple conditioning procedure, the resulting conditioned response seems less adaptive than leg flexion in a restraining harness. The conditioned fear in these types of situations can be maintained for long periods of time, perhaps indefinitely, through interaction with other behaviors. Individuals who have such fears (e.g., phobias) typically adopt behaviors (avoidance responses) that prevent or minimize contact with the fear-eliciting stimuli. This not only results in undesirable restriction of unrelated activities, but also allows many situations that are only remotely related to elicit low levels of fear or anxiety in anticipation of approaches to the original stimulus.
Conditioned emotional response
This interference with unrelated behaviors formed the basis for another experimental model which is termed conditioned emotional response (CER) or conditioned suppression (cf., McAllister & McAllister, 1971). In this procedure, some baseline behavior such as lever pressing for food reward is established. After stable rates of responding have been attained, a long-lasting CS (e.g., a 90-sec tone) is presented and terminates with the presentation of a brief intense shock. After a few such pairings of the tone and shock, the subject will suppress responding during the CS presentation, even though the food reward contingency is still in effect and completely independent of the tone-shock pairings. The usual interpretation of this is that the tone elicits a conditioned fear response which is incompatible with feeding. This procedure has been used in countless experiments as the prototype of situations in which conditioned fear interferes with other, unrelated behaviors. Ironically, the powerful influence of the CER situation can be attributed to the actual lack of relationship to the lever pressing that is food rewarded.
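Suppression in the CER procedure is commonly quantified with a suppression ratio that compares responding during the CS with responding in an equivalent period just before it. The index below is standard in the conditioning literature, though the text does not give it explicitly, and the counts are hypothetical.

```python
# A standard index from the conditioning literature (not given explicitly in
# the text): the suppression ratio. The response counts below are hypothetical.

def suppression_ratio(responses_during_cs, responses_before_cs):
    """B / (A + B), where B = lever presses during the CS and A = presses in
    an equal period immediately before it. 0.5 means no suppression;
    0.0 means complete suppression of responding during the CS."""
    return responses_during_cs / (responses_during_cs + responses_before_cs)

print(suppression_ratio(2, 38))   # 0.05: strong conditioned suppression
print(suppression_ratio(20, 20))  # 0.50: no suppression
```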
Punishment
The punishment procedure contrasts sharply with the noncontingent shock presentation that characterizes the CER. Punishment procedures specifically deliver an aversive stimulus each time a particular response is made, and have the advantage of greatly narrowing the range of suppressed behaviors, leaving most other behaviors unchanged. Another way of looking at this phenomenon is that the behavior per se comes to serve as the CS which predicts shock. Other behaviors do not predict shock and, hence, do not lead to the learned fear that suppresses ongoing activity.
Conflict
Punishment procedures are not without problems. To the extent that the behavior in question is strongly motivated, the delivery of punishment can lead to a situation of conflict. One of the most widely used conflict procedures, which will be referred to in several cases later, is the procedure developed by Geller and Seifter (1960). This procedure combines several elements of the experimental situations that have been described above. First, the subjects are trained to press a lever to obtain some positive reinforcer such as food, which is usually presented on a variable interval (VI) schedule of reinforcement. After behavior is well established, a long-lasting CS is presented. Unlike the CER situation, this CS does not signal the actual delivery of a shock, but rather signals the presence of a punishment contingency in which every response is accompanied by both food reward and a brief shock. This situation provides a clear marker for the punishment contingency, and shock levels and food motivation can be varied to maximize or minimize the level of conflict.
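A minimal sketch of the two components of the schedule, as just described, might look like the following; the function and parameter names are invented simplifications.

```python
# A hypothetical simplification of the two components of the Geller-Seifter
# schedule described above; function and parameter names are invented.

def geller_seifter_press(cs_on, vi_interval_elapsed):
    """Consequences of one lever press.
    - CS off (VI component): food only, and only when the variable
      interval has elapsed.
    - CS on (conflict component): every press produces both food and a
      brief shock (the punishment contingency)."""
    if cs_on:
        return {"food": True, "shock": True}
    return {"food": vi_interval_elapsed, "shock": False}

print(geller_seifter_press(cs_on=False, vi_interval_elapsed=True))
print(geller_seifter_press(cs_on=True, vi_interval_elapsed=False))
```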
Two-way avoidance
Finally, it should be noted that conditioned fear can interfere with fear motivated behavior as well as with positively reinforced behaviors. One situation in which this is especially salient is the two-way avoidance situation. In this task, a CS such as a light is presented in one end of an alley, followed by the delivery of foot shock. The subject can escape (or avoid) the shock by shuttling to the other end of the alley. After a period of time, the CS is presented in the other end of the alley, and the subject must return to the original location in order to escape or avoid the shock. Thus, there is no actual safe location; rather, the organism must learn that the CS signals the onset of "local" shock which can only be avoided by returning to yet another location in which shock already has been experienced. Of particular importance in considering this type of behavior is that increases in the amount of fear (i.e., higher shock intensities) actually slow down the rate of learning (cf., Moyer & Korn, 1964). The conflict in this situation interferes with learning to such an extent that typical rats require dozens or hundreds of responses to learn the task, while many do not learn at all.
The Human Condition
The experimental procedures described above, along with many variations, have been used extensively in basic research related to learning and the aversive control of behavior. Although it is an oversimplification, these tasks bear a reasonably close relationship to the various categories of psychiatrically important fears that may be encountered in the clinic.
The simple conditioning procedure can set the stage for both the normal, benign fears of everyday life and the more debilitating phobias. The distinction lies primarily in the time course and severity of the conditioned fear, as well as the object of the fear. In many cases, the conditioned fear response is only weakly established and transient, owing to the lack of a consistent relationship with a strongly aversive stimulus. Such fears are of little consequence. However, a strongly based fear of a common object or situation (e.g., elevators, bridges, cats, etc.) can be maintained indefinitely and even strengthened over time, owing to the individual's ability to avoid contact with the feared object.
In situations in which the object of the phobia cannot be avoided, the resulting influence on behavior is comparable to that observed in the CER procedure. The fear that results from the presence of the CS interferes with virtually all ongoing behaviors. This lack of behavior is not only debilitating in and of itself, but prevents the occurrence of behaviors that might normally lead to the extinction of the fear.
The clinical etiology of compulsive behavior is considerably more complex, but many cases may have their roots in simple Pavlovian conditioning. The disorder is complicated by the interaction of the learned fear response with overt behavior. Just as the punishment procedure described above is effective because the behavior itself comes to serve as a CS that signals an aversive consequence, behaviors that are a part of the compulsive repertoire can serve both to elicit the fear and then to reduce it, setting up a vicious cycle.
Vague or nonspecific anxieties are perhaps the most common form of debilitating fears. As the term implies, there is frequently some degree of uncertainty about the actual source of fear. Furthermore, these vague fears can build upon themselves, such that individuals begin to fear that certain situations may lead to fear. In Pavlovian terms, this would be fear that anticipates the arrival of a CS that signals an aversive event. In human terms, it is the "fear of fear itself" that seems to be particularly dangerous.
The pattern of reactions described above can be elicited by a wide variety of situations, the major criterion being a situation that offers real or perceived danger. The physiological changes that result have clear, adaptive value by virtue of increasing the likelihood of successfully fleeing or fighting off the aversive situation. Indeed, any local folklore contains at least a few anecdotes of nearly superhuman feats that were accomplished under the influence of the sympathetic stress response.
General Adaptation Syndrome
The beauty of the adrenal stress response lies in the speed with which it prepares the organism for action, but the resulting changes in physiology simply cannot be maintained for long periods of time. Hans Selye looked beyond this immediate response to stress and made two very important observations: (a) long-term exposure to stressful situations can deplete the organism's ability to maintain the stress response, and (b) the pattern of these deleterious effects is independent of the source of stress. Selye (cf., 1956) outlined a three-stage progression of responses to stress that he termed the General Adaptation Syndrome: Alarm, Resistance and Exhaustion. When a stressor is first encountered, a series of responses is initiated in the autonomic nervous system, the immune system and other defenses to cope with the emotional, behavioral and physiological aspects of the stressor. This is called the Stage of Alarm. The maintenance of this reaction to the stressor, which includes reparative processes such as fever regulation, tissue repair, control of inflammation, etc., is termed the Stage of Resistance. In some cases, the stressor cannot be successfully countered, and the organism enters the Stage of Exhaustion. In this stage, the defenses against the stressor begin to fail, metabolic reserves are depleted, there is a general decline in physiological functions, and serious illness or death ensues.
One of the most important of Selye's observations was that this is a general response that is independent of the situation that initiates it. The three stages of the General Adaptation Syndrome can be triggered by disease, injury, psychological stress, or some combination of these.
Surgical Shock
One of the common sources of trauma that can initiate the stress syndrome is that associated with surgical procedures. Even before the time of Selye, surgeons recognized the dual hazards of their art. Death can result either as a direct effect of surgical complications, or as a result of surgical shock that occurs even when the surgical procedure itself is successful. The French surgeon, Henri Laborit, became interested in this phenomenon in the 1940's and undertook a program of clinical research and observation that was to have far reaching consequences for the treatment of stress related disorders (cf., Caldwell, 1970).
Laborit recognized that surgical trauma involved intense activation of the autonomic nervous system. Normally, the autonomic nervous system maintains bodily functions within fairly tight limits, automatically adjusting the organism's physiology to meet its ongoing requirements. These routine adjustments are primarily the responsibility of the parasympathetic, or vegetative, division of the autonomic nervous system (see Fig. 4-4). But in times of severe stress, these systems can run amok, producing bodily changes that are counterproductive, leading to the life threatening condition that is commonly referred to as shock. Attempts to treat the stress may, in some cases, contribute further to the stress. Laborit stated this with an eloquence that survives translation:
"In fact, perfect lytics are not yet at our disposal and even if one existed, it probably would be effective only in large doses. In that case, an injection of the drug would increase the stress that, when it attains a certain level, elicits organic defense reactions that are quite contrary to our fixed goals (prevention or mitigation of those exaggerated reactions that defend our invariant inner milieu that guarantees liberty but not always life.)
(trans. by Caldwell, 1970, p. 29)
Laborit was not alone in challenging Cannon's sympathetic model of stress. In a paper that was originally published in 1942, Cannon had suggested that massive overreaction of the adrenal system could lead to Voodoo death (sudden death that was caused by emotional rather than physical stress). The most impressive evidence against this model came from an elegant series of experiments performed by a psychologist, Curt Richter, who investigated this curious phenomenon of sudden death.
Richter's initial experiments bore little or no relationship to the stress syndrome. He had become concerned that the methodical inbreeding of the albino laboratory rat had rendered it too weak to serve as an adequate model subject. He attempted to prove his hypothesis by showing that the albino rat was physically weak when compared to its wild, Norway rat counterpart. He developed an endurance test that involved swimming in a circular tank, equipped with a sort of whirlpool in the center that ensured continuous swimming. The results of the first experiment were somewhat curious: At optimal water temperatures, most of the rats swam 60-80 hours, but a few died within 5-10 minutes. Why? Richter recalled an earlier observation in which a rat's whiskers (vibrissae) had been trimmed as part of another experiment. The rat began to behave strangely, and died about eight hours later! Now, Richter suspected that this might have been related to stress, and clipped the whiskers of 12 rats before doing the swim test. Three of the 12 died within minutes, but the remaining nine swam 40-60 hours. By contrast, all wild rats tested in the same way died within minutes, and many of them died even without the whisker clipping!
Richter searched beyond the superficial aspects of these results. He suspected that this sudden exhaustion and death of the wild rats might be related to the Voodoo death phenomenon, as suggested by Cannon. The prediction to be made by Cannon's sympathetic model was clear: the release of adrenaline should cause the heart to beat faster and faster until it no longer had time to fill between beats, leading to death in systole (i.e., a contracted heart). The actual results were exactly opposite. The heart rate of the wild rats became slower and slower, with the autopsy showing the heart to be completely engorged with blood. These results bore all the earmarks of a massive parasympathetic response.
Richter tested the notion that this was a parasympathetic response using two pharmacological procedures. In one case, he administered mecholyl (a parasympathetic mimicker) to the albino rats. They quickly gave up on the swimming task and sank to the bottom, like the wild rats. In the other case, he administered atropine (a parasympathetic blocker) to the wild rats, which prevented the sudden death in some, but not all, of the rats tested. The combination of these results, summarized in Figure 4-5, pointed to an irrefutable conclusion: The sudden death phenomenon was parasympathetic. Why?
Richter pursued the emotional causes of this stress syndrome. Is it possible that the normal, sympathetic response to stress is replaced by a parasympathetic response under extreme conditions? The rats' vibrissae provide a major source of information, and the loss of this information in a hostile environment such as the swimming tank could render the situation hopeless, leading to this paradoxical parasympathetic response. But what about the wild rats? Richter suggested that they may also view the situation as hopeless simply because (being wild) they find handling more stressful and have never before been in captivity. To test this notion, he allowed several of the wild rats to sink to the bottom of the tank. Then, retrieving them from otherwise certain drowning, he placed them on the table until they recovered, then put them back in the tank. After a few repetitions of this lifeguard routine, the wild rats would swim for many hours. The conclusion, which seems valid, was that the wild rats learned that the situation was not hopeless after all.
The results of Richter's experiments bring up several important points that go beyond the analysis of the stress syndrome:
1. A behavioral phenomenon can be blocked through the pharmacological blockade of the target organ receptor. (Atropine prevented the sudden death in wild rats.)
2. A behavioral phenomenon can be mimicked or exaggerated through the pharmacological stimulation of the target organ receptor. (Mecholyl triggered the sudden death in albino rats.)
3. Manipulations that change the perception of the environment can either exaggerate a behavioral phenomenon (as in the case of shaving the rats' vibrissae) or block a behavioral phenomenon (as in the case of rescuing the wild rats).
4. The perception of the environment is an important determinant of the nature of the autonomic response to stressors.
Ulcers
Executive monkeys
The hallmark of stress disorders is the formation of ulcers. This condition has become synonymous with demanding job situations such as executive positions, and with other situations that involve daily exposure to stressful conditions. The superficial reason for ulcer formation is the release of stomach acids into an empty stomach. The presence of these digestive juices, along with some local vascular changes, leads to the digestion of the stomach lining itself, and can sometimes lead to an actual hole through the stomach wall, a perforated ulcer. The real reasons for ulcer formation, however, can be traced back to the emotional responses that set the stage for this untimely release of digestive juices.
Ulcers are far more than a clinical curiosity. They are painful and even life threatening to the individuals who are afflicted. Furthermore, they account for tremendous financial losses in terms of workdays lost and medical costs. The impact of this disorder has stimulated a great deal of research to determine the cause of the disorder and to develop pharmaceutical treatments for the disorder. Obviously, the best solution would be to eliminate the conditions that initiate the ulcerative process, and toward this end, there has been considerable effort to develop an animal model of the stressful conditions that cause hypersecretion of gastric acids.
The cornerstone of this effort was Brady's (1962) so-called Executive Monkey study. This study is important for historical reasons, even though the basic conclusions drawn from the study were, ultimately, shown to be exactly opposite to current knowledge in the area. Brady trained a group of monkeys to perform a free operant (Sidman) avoidance task which required that a lever be pressed to avoid shock to the tail. If the monkeys allowed too much time to elapse before pressing the lever, an electrical shock was delivered to the tail. The executive monkeys spent each workday sitting in the restraining chair performing this task. The worker monkeys sat in a similar restraining chair with electrodes attached to their tails, but the delivery of electrical shock was entirely dependent upon the executives' decisions. If the executive received a shock, so did the worker. Consistent with the predictions, the executive monkeys eventually developed gastric ulcers and the worker monkeys did not. Unfortunately, these results support the wrong conclusions because of a combination of procedural details and design flaws. We will return to an analysis of these results later.
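The free operant (Sidman) avoidance contingency can be sketched as a simple timer that each response resets. The code below is a schematic assumption; it also assumes, for simplicity, that the response-shock and shock-shock intervals are equal, and the values are illustrative rather than Brady's actual parameters.

```python
# A schematic sketch of the free operant (Sidman) avoidance contingency.
# Assumes equal response-shock and shock-shock intervals for simplicity;
# the interval values are illustrative, not Brady's actual parameters.

def run_sidman(press_times, rs_interval=20.0, session_length=60.0):
    """Shock is delivered whenever rs_interval seconds pass without a lever
    press; each press resets the timer, and so does each shock. Returns the
    times (seconds) at which shocks occur."""
    presses = sorted(press_times)
    shocks = []
    timer_start = 0.0
    while timer_start + rs_interval <= session_length:
        due = timer_start + rs_interval
        next_press = next((p for p in presses if timer_start < p < due), None)
        if next_press is None:
            shocks.append(due)        # no press in time: shock, timer restarts
            timer_start = due
        else:
            timer_start = next_press  # press resets the response-shock timer
    return shocks

print(run_sidman([5, 18, 30, 45]))  # vigilant pressing -> no shocks: []
print(run_sidman([]))               # no pressing -> shocks at 20, 40, 60 s
```

Note why the task demands constant vigilance: there is no external CS, so the only way to avoid shock is to keep responding before an internal deadline.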
The triad design
The most comprehensive behavioral research in this area has been done by Weiss and his associates (e.g., 1968; 1981). These experiments, utilizing rats as subjects, have reached conclusions that are diametrically opposed to those of Brady, but at the same time confirm the actual results of those early studies. Although the testing procedures have varied over the years, most of these experiments have utilized the apparatus and procedures that are shown in Figure 4-6 and outlined below.
The rats were placed into small restraining cages with electrodes attached to their tails. A small wheel, located immediately in front of the rats, could be turned with their front feet. In the prototype situation, there were three testing conditions (a triad) that differed in terms of the degree of interaction each rat had with the shock. Although each of these experiments involved many rats, the testing was always conducted in triads so that the environmental conditions of the rats were interdependent. The experiments will be described separately to demonstrate the major conclusions that were reached by Weiss' research group.
Control of stressors
The most critical set of experiments involved an assessment of the importance of control over the environment. The test triad in these experiments was exposed to the following conditions (a schematic sketch in code follows the list):
(a) Escapable: Electric shock was delivered to the rat's tail at random intervals. Once the shock was begun, it was programmed to continue until the rat turned the wheel with its front paws. Thus, the rat had control over the termination of the shock.
(b) Inescapable: This rat's tail was connected to the same shock source as the experimental rat. Although it could turn the wheel, the wheel did not influence the shock. Shock termination only occurred when the experimental animal successfully turned it off. This link to the behavior of another subject is referred to as a yoked control procedure.
(c) Control: This rat was maintained in the restraining cage for the duration of the experiment, but was not exposed to the electric shock.
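Schematically, the yoking arrangement can be expressed as follows. This is a sketch of the design logic only, not Weiss's apparatus; the class and method names are invented.

```python
# Schematic of the yoked (triad) design; a sketch of the design logic only,
# not Weiss's actual apparatus or code. Names are invented for illustration.

class TriadShockCircuit:
    """One shock source shared by the escapable and yoked rats; only the
    escapable rat's wheel is wired to it."""

    def __init__(self):
        self.shock_on = False

    def shock_onset(self):
        # Shock begins simultaneously for the escapable and yoked rats;
        # the restrained-control rat is never connected to the circuit.
        self.shock_on = True

    def wheel_turn(self, rat):
        if rat == "escapable":
            self.shock_on = False  # this rat controls shock termination
        # "yoked": the wheel is disconnected, so an identical response
        # has no effect on the shock

circuit = TriadShockCircuit()
circuit.shock_onset()
circuit.wheel_turn("yoked")
print(circuit.shock_on)    # True: the yoked rat's response does nothing
circuit.wheel_turn("escapable")
print(circuit.shock_on)    # False: shock ends for both shocked rats
```

The point of the yoking is that the physical shock exposure of the two shocked rats is identical; only the behavioral contingency differs.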
The results of these experiments were clear: The rats in the yoked control condition developed gastric ulcers; the other two groups did not. Contrary to the results of Brady's experiments, it was not the subjects in charge of the shock that developed the ulcers. However, if these results are described in slightly different language, they seem to make a lot more sense. The rats that had control (i.e., mastery) over the shock were less stressed than those which were at the mercy of their environment.
Prediction of stressors
A second set of experiments extended Weiss' analysis of the conditions that lead to ulcers. In these experiments the following conditions formed the test triad:
(a) Signaled: A signal (CS) was presented at random intervals, followed by a brief, inescapable shock.
(b) Unsignaled: Again, the rats in this condition received shocks that were identical to those received by the experimental group. The distinguishing feature was that they did not receive the CS that signaled the impending shock.
(c) Restrained Only: These rats received neither the CS nor the shock.
The results of these experiments began to support a more general notion of mastery over the environment. Once again, it was the subjects in the yoked control condition that developed the severe ulcers. The experimental animals received the same shock, but apparently the mere knowledge of when the shock was going to be delivered reduced the stress. They developed very few ulcers.
Presence of Conflict
A third set of conditions begins to come closer to the human conditions that are likely to engender ulcer formation. This is the presence of conflict, which Weiss modeled with the following triad:
(a) Signaled escape: A signal (CS) for impending shock was presented at random intervals, as in the experiments investigating the importance of prediction. However, these rats also had control over the shock, in that turning of the wheel could either terminate the shock or, if it occurred during the CS, actually avoid the shock altogether.
(b) Conflict: These rats were exposed to conditions that were identical to those of the experimental rats, except that on some trials, the wheel turning response itself was punished with electric shock.
(c) Restrained Only: Again, these rats were simply restrained for the duration of the experiment.
The rats in the signaled escape condition of this experiment were completely free from ulcers. As suggested by the separate experiments above, the presence of both prediction and control prevents the formation of ulcers. The presence of conflict, however, led to severe ulceration. In some sense, it would appear to be better to have no control or prediction at all than to have these available but inconsistent. Figure 4-7 summarizes these results.
Stressors Revisited
The results of these experiments support a remarkable conclusion: Noxious stimuli are not inherently stressful. In all of the experiments above, the experimental group received shock that was identical to that of the second group in terms of the interval of presentation, the intensity and the duration. The critical factor was not the presence or absence of electric shock, but rather the presence or absence of what we might call a particular "interpretation" of the electric shock. Prediction, control, and the absence of conflict are the three factors that prevent noxious stimuli from becoming stressors.
Why did Brady get the opposite results? The answer lies within Weiss' experiments. Animals that are exposed to shock (even though it is neither predictable nor controllable) will not develop ulcers unless the frequency of occurrence is fairly high; an occasional brief shock is simply not stressful enough to cause a problem. In Brady's experiments, the executive monkeys were skilled enough to prevent most shocks from occurring, so the worker monkeys were not exposed to very many shocks. There are several reasons for the development of ulcers in the executive monkeys. Even though they had control, the free operant situation requires constant vigilance, and there is no external CS to predict the shock. The sessions lasted for hours, and the constant requirement of timing responses to avoid shock is obviously stressful. Another important factor was that all the monkeys were initially trained in the executive condition, and when about half of the subjects had mastered the task, the remaining subjects were switched to the worker condition. This biased selection of subjects made it even more likely that the executive group would develop ulcers, because later studies with rats have shown (for reasons that are not clear) that rats which learn avoidance responses quickly are also more prone to develop ulcers.
All of this is consistent with the conditions that lead to ulcers in the human environment. The prediction and control of corporate executives are illusory. Although they are required to make decisions, the environment is sufficiently complex that the outcome of the decisions is uncertain and occasionally punished (hence, conflict). It is the menial laborer who has prediction and control by virtue of simple tasks, scheduled daily activities, and known outcomes for most work related behavior. Not that these individuals are immune to ulcers, but the source of the conditions that lead to the ulcers is more likely to be found in the home or social environments of these individuals than in their work places.
The experimental procedures that result in ulcer formation fit into a larger context of situations that produce aberrant responding of the autonomic nervous system. The procedures that produce ulcers do not appear, on the surface, to be life threatening. When compared to the trauma of either a surgical procedure or Richter's swimming task, the lack of prediction or control over electric shock would seem to be rather benign. Yet, the common emotional fabric of all of these is the hopelessness and lack of control of the environment. It is the behavioral interpretation of the environment (be it valid or not) that leads to an autonomic imbalance in the direction of parasympathetic over-responding.
One of the effects of surgery (or other tissue damage) is the release of histamine, which is also a potent stimulator of some autonomic target sites. (A common example is the redness of the skin that occurs through local vascular responses when it is scratched.) This response to tissue damage (which Laborit called "silent pain") occurs under anesthesia as well as when an individual is awake, adding to the complications of surgery. By the late 1940's, several antihistamine compounds had been developed, and were being used with some degree of success to control surgical shock. Laborit was searching for what he termed the perfect lytic compound: a drug that would stabilize the autonomic nervous system and, in a sense, "dissolve" the patients' fears. He was somewhat pessimistic, however, because he recognized that a heavy dosage of a drug is itself a stressor that can trigger the stress syndrome (see his quote above).
Despite his pessimism, Laborit saw hope for a lytic compound in one of the antihistamines, namely promethazine. In addition to its effects of stabilizing the peripheral autonomic nervous system, the drug also had mild effects on the central nervous system, resulting in a sort of indifference to the stressful environment. This indifference was in contrast to the troublesome sedative and hypnotic side effects that accompanied many of the other antihistamine compounds. Caldwell (1970) relates an instance in which one of Laborit's patients ran through a red light, even though he was not noticeably drowsy or inattentive. Working with a biochemist in a drug company (Specia), Laborit guided the manipulation of antihistamine molecules to bring about maximal central activity, irrespective of action in the periphery. Finally, on December 11, 1950, the drug that was to launch modern psychopharmacology was synthesized: That drug was chlorpromazine.
Literally thousands of experiments have been done to test the effectiveness of chlorpromazine, but the acid test in terms of animal experiments would be Richter's swimming test. If the drug is truly effective as an autonomic stabilizer, then it should prevent the sudden, parasympathetic death of rats in the swim test: It did.
The first patient to be treated with chlorpromazine was a young man who had a history of agitated, psychotic behavior. He had entered the Val-de-Grace Hospital in September of 1949 and received 15 shock treatments. In February of 1951, he returned to the hospital and received 24 additional shock treatments (both insulin and electric). In January of 1952, he was given 50 mg of chlorpromazine and immediately became calm. After seven hours, his agitation returned, but subsided again with a second dosage of the drug. Gradually, the drug's effectiveness lasted longer and longer, and the patient was released after 20 days.
It is almost impossible to overestimate the impact of this drug and the related phenothiazines on the care and treatment of psychiatric patients. Prior to the advent of chlorpromazine, psychiatric patients were rarely released from the hospital. The chronic, in-patient population was ballooning, and the care bordered on the barbaric. Straitjackets and restraining chairs were used routinely for the protection of patients and staff alike. Electric and insulin shock treatments were common practice. There were no alternatives, and the patients were more likely to get worse than to get better. Chlorpromazine literally freed the psychiatric patients from their bondage. It effectively reduced their fears and agitation to the point that restraining devices were unnecessary. The drug was not habit forming and tolerance was minimal. Most importantly, the patients were not asleep as they had been with barbiturates and other sedative/hypnotics. They retained their ability to interact with their environment, but were indifferent to the stressors.
With the advent of chlorpromazine, patients went home. As shown in Figure 4-9, there is a dramatic reversal of the in-patient population beginning in 1952. The savings in dollars have been estimated in the billions, and the savings in human suffering are incalculable. The patients were not, to be accurate, cured. But the drug allowed them to regain a sufficiently cogent interaction with their environment to be taken care of safely in a family setting.
The details of the action of the phenothiazines will be presented more fully in Chapter 6, but it is important to consider the development of the drugs at this point because of the impact they had on the investigation of the pharmacology of stress. The immediate success of chlorpromazine made drug therapy in psychiatry a reality, and spawned a major search within the pharmaceutical industry for more, if not better, compounds. As a result, chlorpromazine is simply the prototypical example of a group of chemicals known as phenothiazines, which are sometimes referred to as neuroleptics (in reference to their autonomic stabilizing effects), as major tranquilizers (in reference to the Pavlovian deconditioning effects), and as antipsychotics (encompassing both of the above and the fact that they are especially effective in treating this patient population).
The Antianxiety Drugs (Benzodiazepines)
The success of chlorpromazine in treating psychotic patients led to an intense search for other drugs that would have a calming influence, particularly on the fears and anxieties that occasionally interfere with the lives of otherwise normal individuals. The phenothiazines were, to some extent, too much of a good thing. The emotional flattening and autonomic side effects were reasonable alternatives to psychotic episodes, but seemed like a high price to pay for the treatment of patients who were, perhaps, a little nervous about their new job. Consequently, the search for new drugs was aimed toward compounds that would calm the day-to-day anxieties while having only minor side effects. The most successful drugs produced by this effort were a class of compounds known as the benzodiazepines, of which chlordiazepoxide (Librium) and diazepam (Valium) are the most commonly prescribed.
These compounds are variously referred to by the name of the chemical class, as minor tranquilizers, and as antianxiety compounds. They are useful and widely prescribed to reduce the tensions and anxieties associated with job and family situations, as well as to relieve or prevent associated problems such as muscle tension and headaches.
The screening of drugs that are potentially useful in treating stress related disorders virtually requires animal models. The financial costs, time requirements, and potential dangers of clinical tests with humans all require that the initial stages of testing be done with animal tests. As a result, there are several testing procedures that are useful in categorizing the drugs and in providing further information about the nature of the behavioral changes produced by the drugs' actions on the brain.
In the discussion above, it was pointed out that one of the major effects of chlorpromazine was something termed Pavlovian deconditioning. The results of animal tests confirm this notion, and it is worthwhile to directly compare the effects of the phenothiazines and the benzodiazepines on these types of tests. In appropriate dosages, chlorpromazine (and other phenothiazines) can reduce avoidance responding (i.e., conditioned responses to fear), while leaving escape behavior intact (cf., Cook & Sepinwall, 1976; see Fig. 4-10A).
This selective effect on these two closely related responses provides an excellent initial screen for drugs that are likely to share the antipsychotic effects of chlorpromazine in the clinic. By contrast, the anti-anxiety compounds reduce avoidance behavior only in dosages that are sufficiently large to also impair escape responding (Fig. 4-10B). This nonspecific effect can be obtained by several different classes of drugs (e.g., those that simply impair movement), so this task has little or no utility in screening for new compounds that might serve as anti-anxiety drugs.
There is, however, a task that provides a sensitive screen for potential antianxiety drugs. These drugs seem to be uniquely effective in changing performance in the Geller-Seifter punishment procedure that was described earlier. Initially, this test was used to demonstrate the specific effects of barbiturate drugs, because these were the most widely prescribed drugs for the treatment of anxiety. The specific effect on punished responding following barbiturate administration is mirrored by the administration of chlordiazepoxide and other benzodiazepines. In this test, the animals that have been treated with the drug show perfectly normal behavior patterns in the food rewarded portion of the schedule, but are markedly different from control animals during the punishment portion. Whereas normal rats will stop responding when the signal for shock plus food is presented, rats that have been treated with one of the anti-anxiety compounds are released from this suppressive effect and continue their high rate of responding.
The Geller-Seifter screening procedure is especially important because it discriminates the anti-anxiety compounds from other classes of drugs. Chlorpromazine and other antipsychotic compounds are ineffective in this procedure. General depressants (e.g., barbiturates) or stimulants (e.g., amphetamine or caffeine) of the central nervous system may alter the punished responding, but only in dosages that have a comparable influence on the food rewarded portion of the schedule.
The Geller-Seifter procedure is not the only method for screening drugs for their antianxiety properties. In fact, this method is so cumbersome and time consuming that its use tends to be limited to those situations that require an especially rigorous test of a drug. Other tests which are, perhaps, not so sensitive are much easier to use. For example, chlordiazepoxide will increase the amount of novel food that a rat will consume (Poschel, 1960; in Sepinwall & Cook, 1978). This is apparently not related to any changes in hunger per se, but rather to the more general response to novel (mildly aversive?) situations. When a rat is exposed to a novel environment, there is an increase in plasma corticoid levels. This index of stress can be effectively blocked with administration of minor tranquilizers (e.g., Lahti & Barshun, 1974).
Finally, a particularly easy method of measuring the response to punishment has been shown with the consumption of salt solutions. Although rats show a positive taste response to a hypertonic solution of sodium chloride, the drinking of this solution is rather quickly limited by the aversive postingestional consequences (the animal becomes thirsty as a result of drinking). The administration of minor tranquilizers will increase the amount of hypertonic salt solution that is consumed (e.g., Falk & Burnidge, 1970).
From Laboratory to Clinic and Back
Receptors for Phenothiazines
The pathway from the biochemist's laboratory to the clinician's administration of a drug is not a one way street. Although some of the screening tests may have face validity, there is always a danger that the aspect of the drug that causes an effect on a screening test is not the same as the one that causes its clinical effectiveness. This problem can never be eliminated completely, but the level of confidence can be raised when tight relationships emerge on the basis of extensive use of the drug in humans. If a drug or class of drugs has been used extensively in the clinic, it may be possible to make a direct comparison between the clinical results and some laboratory screening procedure. In the case of the tranquilizing drugs, there are two such relationships that are especially instructive.
As shown in Figure 4-11, there is a very strong relationship between the clinical dosage of the various antipsychotic compounds and the ability of these compounds to displace another molecule (haloperidol) from dopamine receptors. The logic is as follows: If the dopamine receptors of a test object are simultaneously exposed to haloperidol and some other compound, the two drugs will compete for the receptor sites. For the sake of illustration, if 100 molecules of a compound that has a strong affinity for the dopamine receptor are pitted against haloperidol, perhaps as many as 80 of these molecules will be successful in occupying dopamine receptor sites. If a weak compound is used against haloperidol, then perhaps only 20 molecules would be successful. In order to get 80 molecules of the weak compound into the receptor sites, a higher dosage (in this example, 400 molecules) would have to be used. (It should be noted that this test is based on the D2 receptor for dopamine; see Chapter 8 for further discussion of D1 and D2 receptors.)
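The displacement logic can be expressed with the standard equation for simple one-site competitive binding. The sketch below uses arbitrary concentrations and affinity constants chosen so that a high-affinity compound at a low dose and a four-fold weaker compound at a four-fold higher dose reach the same occupancy, mirroring the 100- versus 400-molecule illustration above.

```python
# A sketch of simple one-site competitive binding, the logic behind the
# displacement experiment described above. Concentrations and affinity
# constants are arbitrary illustrative numbers, not measured values.

def fractional_occupancy(conc_drug, k_drug, conc_halo, k_halo):
    """Fraction of receptors occupied by the test drug when it competes
    with haloperidol for the same binding site."""
    return (conc_drug / k_drug) / (1 + conc_drug / k_drug + conc_halo / k_halo)

# A high-affinity compound (small K) at 100 "molecules" and a four-fold
# weaker compound at 400 reach the same occupancy.
print(fractional_occupancy(100, k_drug=1.0, conc_halo=10, k_halo=1.0))  # ~0.90
print(fractional_occupancy(400, k_drug=4.0, conc_halo=10, k_halo=1.0))  # ~0.90
```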
In the clinic, the mechanism of action of the drug may not be known, and efficacy is based upon the relief of symptoms. Drugs that are weaker must be prescribed in larger amounts than drugs that are stronger. When a class of compounds has been given to thousands of patients and dosages have been adjusted, then the compounds can be ranked in terms of their relative strength, or potency. Note that this does not necessarily mean that any one drug is better than another, but simply that some drugs are more potent than others (the same relationship would hold if a single drug were "watered down" so that a larger amount would have to be given to achieve an effective dosage). The observation of interest is depicted in Figure 4-11 (after Creese et al., 1976). When the phenothiazines are rank ordered in terms of their clinical potency, the list is virtually identical to that obtained when they are rank ordered in terms of their affinity for the dopamine receptor. In other words, the potency of a drug to bind to the dopamine receptor is closely related to the potency of that drug to relieve psychotic symptoms in the clinic. It takes a hard-nosed skeptic to believe that this would occur by chance.
In the case of the minor tranquilizers, a comparable relationship can be shown between the clinical potency of these compounds and their effectiveness in blocking the suppression of punished responses in the Geller-Seifter procedure. As schematized in Figure 4-12, drugs that must be given in large quantities to produce the desired clinical effect must also be given in large quantities to change the behavior in the Geller-Seifter procedure.
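The rank-order comparisons in Figures 4-11 and 4-12 amount to computing a rank correlation between clinical potency and laboratory potency. The sketch below does this by hand with invented numbers (the real data are in Creese et al., 1976); it assumes no tied ranks.

```python
# Rank-order comparison of clinical potency vs. laboratory potency.
# All numbers are invented for illustration; see Creese et al. (1976)
# and Figure 4-12 for the actual data.

def ranks(values):
    """Rank positions (1 = smallest); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for position, i in enumerate(order, start=1):
        r[i] = position
    return r

def spearman_rho(x, y):
    """Spearman rank correlation for untied data: 1 - 6*sum(d^2)/(n(n^2-1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

clinical_dose_mg = [2, 10, 100, 400, 800]  # hypothetical daily doses
lab_potency_dose = [1, 5, 60, 300, 500]    # hypothetical screening doses

print(spearman_rho(clinical_dose_mg, lab_potency_dose))  # 1.0: same rank order
```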
The relationship of the benzodiazepines to neurotransmitter systems remained elusive for many years. These drugs do not significantly alter the brain concentrations of dopamine, norepinephrine or serotonin, although the turnover rate of all of these is reduced. Over the years, the compound known as GABA (pronounced gabbuh; short for gamma-aminobutyric acid) has gained increasing respect as a neurotransmitter. It is present in virtually every portion of the brain, it has consistently inhibitory effects by virtue of opening chloride channels (cf., Chapter 8), and it is probably the single most plentiful neurotransmitter in the brain (see Olsen, 1987, for discussion). The receptor for GABA has been termed the GABA receptor complex (see Fig. 4-13) and is one of the most interesting developments in neurochemistry. It would appear that there are three interacting receptors on this site: One of them is the primary GABA receptor, which regulates the Cl- channel. The second is a receptor that responds to sedative and convulsant drugs. The third is receptive to benzodiazepines, and their presence enhances the normal activity of GABA.
There is some possibility that the brain produces endogenous compounds that are comparable to the benzodiazepines. The evidence for these naturally occurring substances is threefold: (a) labeled diazepam is tightly bound to specific receptors, (b) the rank order of clinical potencies of the benzodiazepines is highly correlated with the rank order of the ability of these compounds to displace the labeled diazepam from these receptors (see Fig. 4-14; after Braestrup & Squires, 1978), and (c) exposure to stress appears to block the binding of benzodiazepines, presumably because the sites already have become occupied by some stress induced substance. Furthermore, the rank order of clinical potencies is the same as the rank order of the ability of the compounds to displace this labeled compound from the receptor (cf., Moehler & Okada, 1977; see Fig. 4-15; after Lippa et al., 1978).
There is a great deal of evidence that the brain systems that are involved with reacting to punishment and nonreward utilize acetylcholine as the neurotransmitter (cf., Chapter 2 and Carlton, 1963, for related discussion). Both atropine and scopolamine (cholinergic blocking agents) alter the behavior of rats in a variety of related situations: Behavior that is punished with shock persists. Behavior that is no longer reinforced persists. Stimuli that signal a temporary period of nonreinforcement (time-out experiments) are ignored. Schedules that require low rates of responding to obtain reward (DRL schedules) cannot be mastered. These results have been observed in different laboratories, using different reinforcers and other testing parameters, and in different species. The conclusion that cholinergic blocking agents reduce the response to punishment and nonreward is almost inescapable.
These compounds have also been used in two other situations that seem even more relevant to the reduction of stress responses. One of these has already been discussed: Atropine injections blocked the sudden death phenomenon in Richter's swimming task. The other involves the two-way avoidance procedure. Normal rats have great difficulty learning this task, presumably because a successful avoidance response requires that the rat return to a location in which shock (or a signal for shock) has just been experienced. Scopolamine or atropine dramatically increase the ability to master this task, presumably because they reduce the disabling response to conflict.
When this discussion was begun, it was asserted that these drugs influence the brain systems that control the responses to punishment and nonreward. The evidence for this assertion is strong, but at the same time provides a clue concerning the limited usage of these drugs in the clinic as stress inhibitors. Perhaps the major reason why atropine and scopolamine are not suitable for routine administration in humans is that they are too effective in the periphery. Recall that Laborit had used scopolamine as a presurgical treatment prior to the development of chlorpromazine; it was effective in blocking the strong parasympathetic component of surgical shock. Likewise, this type of cholinergic blockade was effective in blocking the sudden death phenomenon in Richter's studies. However, the potency of these compounds in blocking the parasympathetic effector organs is itself a liability. In the case of diminishing surgical shock or preventing voodoo death, certain undesirable side effects can be tolerated. But for routine administration, the accompanying dry mouth, dilated pupils, decreased gastrointestinal activity and other autonomic effects are undesirable.
The progression of drugs that were used in the prevention of surgical shock provides a particularly good lesson in pharmacological principles. Scopolamine and atropine block the effects of acetylcholine at the receptors of the actual target organs (i.e., the smooth muscles and glands) of the parasympathetic system. In other words, the "command system" of the autonomic nervous system may remain functional while the final response is blocked. Laborit went back one step and blocked the action at the autonomic ganglia with low dosages of curare. This resulted in an autonomic stabilizing effect, by reducing activity of both the sympathetic and parasympathetic divisions of the autonomic nervous system. This type of action also had its limitations, because it was, in some sense, masking the final stages of a stress reaction that had already been initiated in the central nervous system. Laborit was seeking a drug effect that would block the initial stress interpretation in the brain, and found this effect in chlorpromazine. The major point here is that it is preferable to forestall the stress reaction in its initial stages than to allow it to develop and then block its effect at some later point along its synaptic route.
The problem with scopolamine and atropine is not that they lack central effects, but that they have both central and peripheral blocking activities. In fact, there is strong evidence that the major influence of these compounds on the tasks outlined above is attributable to their effects upon the brain rather than the autonomic effectors. Both of these compounds are amines; in their normal states, the nitrogen on the side chain has three radicals attached to it and is neutral. These compounds can be transformed biochemically by adding a fourth radical (methyl) to the nitrogen, leaving it with a positive charge. The resulting compounds (called quaternary amines) are commonly referred to as methyl atropine and methyl scopolamine and have the very useful property of being virtually unable to penetrate the blood brain barrier (cf., Chapter 3). This property means that nearly all of their blocking effects are restricted to the peripheral parasympathetic effectors, while brain acetylcholine systems are left to function normally (see Figure 4-16).
A typical experimental design compares the behavior of a control group (saline injected) with
that
of a
group injected with standard atropine and that of a group receiving methyl atropine. In virtually
every
experiment that has been done, the results are clear cut: Standard atropine reduces the response
to
punishment, nonreward and conflict, whereas methyl atropine (which has the same or even more
potent
peripheral effects) has no effect on these behaviors. What this means is that the blockade of the
parasympathetic organs plays little or no role in the effects of these drugs on stress related
behaviors.
Virtually all of the effects can be attributed to their action on the brain. In this regard, it would be
very
interesting to know if methyl atropine would prevent the sudden, parasympathetic death in
Richter's
swimming task (it probably would not) and if a form of scopolamine that worked only on the
brain, but
not the periphery, would be a useful drug in the treatment of clinical stress disorders (it probably
would). In any event, we are not yet finished with the role of the autonomic nervous system in
stress
responses, and we soon will see evidence that the peripheral responses are considerably more
important than they were once thought to be.
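The logic of this three-group design can be summarized in a brief simulation. The sketch below is purely illustrative: the response rates, effect size, and group size are hypothetical numbers chosen only to display the expected pattern of results, not data from any actual experiment.

```python
# A purely illustrative sketch of the saline / atropine / methyl atropine design.
# All numbers are hypothetical; only the pattern of results matters.
import random

random.seed(1)  # reproducible illustration

def punished_responses(baseline, central_blockade, n=12):
    """Simulate responses emitted during a punished (conflict) period.

    Assumption built into the sketch: only central muscarinic blockade
    releases punished responding; peripheral blockade alone does nothing.
    """
    rate = baseline * (3.0 if central_blockade else 1.0)
    return [random.gauss(rate, rate * 0.2) for _ in range(n)]

groups = {
    "saline":          punished_responses(10, central_blockade=False),
    "atropine":        punished_responses(10, central_blockade=True),   # crosses the blood brain barrier
    "methyl atropine": punished_responses(10, central_blockade=False),  # excluded by the barrier
}

for name, scores in groups.items():
    print(f"{name:>15}: mean punished responses = {sum(scores) / len(scores):5.1f}")
```

Only the standard atropine group shows a release of punished responding; the methyl atropine group tracks the saline controls, and it is this dissociation that places the behavioral effect in the brain.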
Treatment of Ulcers
As in the more acute instances of shock reactions, the formation of ulcers can be blocked or
retarded
by the injection of cholinergic blocking drugs such as atropine. Atropine is not, however, the treatment of choice in the clinic, for the same reason discussed above: side effects. It is one thing
to
demonstrate the effectiveness of atropine by blocking the formation of ulcers in an animal
experiment
that lasts a few hours or a few days. It is quite another to use such a broad spectrum drug over a
period of years in a human patient.
There are two pharmacological solutions to this problem, and they reflect fundamentally different therapeutic strategies. One of these, which we have seen above, is to counter the stress response at the developmental stages in the brain. In this regard, the antianxiety compounds are successful both in experimental models and in the clinic. Chlorpromazine might also be effective, but because of its potency it is not routinely used for this purpose. Obviously, another even more desirable (and effective)
approach is to eliminate the environmental conditions in the patient's life that lead to the formation
of
ulcers, but it is not always easy for the therapist to extricate people from their yoked control
situations.
The second pharmacological approach is basically to ignore the stressful situation per se and to block, very specifically, the final stage of the stress response at the gastric receptors. As discussed
earlier,
cholinergic blockade is not sufficiently specific, but there is an alternative. Once again, the roots
of this
alternative go back to Laborit's work on surgical shock. He referred to the silent pain of the
surgeon's
knife, recognizing that the tissue damage resulted in a large autonomic response. This was due to
the
stimulating properties of histamine (literally meaning amine
from the
tissues) on autonomic effectors.
Recall that Laborit's search for an autonomic stabilizer centered on antihistamines, but most of
these
compounds had broad actions in both the central and the peripheral nervous systems. Over the
years,
the research that was spawned by these early problems led to the discovery of at least two types
of
histamine receptors, called H1 and H2 (see Douglas, 1980, for
discussion).
Of these, the H1
receptors are far more common, being involved in response to injury, hypersensitivity reactions
(allergies), and other conditions. The H2 receptors are far less common, being primarily involved
with
the regulation of the volume and acidity of gastric secretion (see Figure
4-17). Thus, it is possible to
administer an H2 blocking compound that will block the hypersecretion of ulcer producing
stomach
acids, while leaving most of the remaining activities of histamine unaltered. One of these
compounds,
cimetidine (trade name, Tagamet), has become one of the most
widely prescribed drugs in the world!
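The receptor selectivity described here can be reduced to a simple schematic. The sketch below is a deliberately simplified model in which the receptor-to-function mapping covers only the examples named in the text; it shows why blocking H2 spares the H1-mediated functions.

```python
# Schematic model of histamine receptor selectivity (simplified; the mapping
# covers only the example functions named in the text).
HISTAMINE_ACTIONS = {
    "H1": ["response to injury", "hypersensitivity (allergic) reactions"],
    "H2": ["volume and acidity of gastric secretion"],
}

def remaining_actions(blocked):
    """Histamine actions that persist when the given receptor types are blocked."""
    return [action
            for receptor, actions in HISTAMINE_ACTIONS.items()
            if receptor not in blocked
            for action in actions]

# Cimetidine blocks only H2: gastric hypersecretion is prevented,
# while the H1-mediated functions proceed normally.
print(remaining_actions(blocked={"H2"}))   # -> the H1 functions only
```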
William James (1890) proposed an alternative view of emotional experience which, on the surface, seems totally unreasonable. The James-Lange formulation proposed that the emotion provoking stimulus
triggered
the autonomic nervous system directly (although there was a provision for central nervous system
involvement), but the actual experience of the emotion lagged behind and depended
upon a
"reading"
of the autonomic reaction. Popular (and overly simplistic) metaphors of this theory proclaim that
an
individual "...is fearful because he is running from a bear", or "...is angry because she hit
somebody."
This notion seems to have confused cause and effect.
Cannon (1927) pointed out a series of problems with James' view
of
the emotional experience: (a)
The visceral response is slow to develop. (b) The viscera themselves are rather insensitive, even
to
physical trauma such as cutting or cauterization. (c) The same response (e.g., an elevated heart
rate)
can be elicited by fear, running around the block, or falling in love. (d) Patients with spinal
injuries that
lead to paralysis and loss of bodily sensations experience full emotions. (e) Injections of
adrenaline do
not result in emotional experiences. At the time of this argument, Cannon was perhaps the
ranking
physiologist of the world, and William James was merely a gifted writer, philosopher, and
psychologist
who was treading on the foreign soil of physiology. Cannon's view prevailed.
Cannon's professional stature overshadowed some of the weaknesses of his objections.
However,
the
weaknesses became more and more apparent as additional information about the autonomic
nervous
system unfolded through the years. It is true that visceral changes are sluggish and slow to develop, but it is equally true that the full emotional experience is often slow to develop. An all-too-frequent experience is the near miss of an automobile accident, which almost instantly mobilizes complicated motor
responses, while the full range and impact of the emotions may come seconds, minutes, or even
hours
later. It is also true that the viscera can be cut, cauterized and otherwise insulted during surgery
with
little or no sensation to the patient, but this is a moot point. We certainly can experience the rapid
heartbeat, flushed skin, and butterflies in the stomach during emotional experiences. Cannon's
point
about the origins of an increased heart rate was also weak, in that he failed to recognize the
possibility
that different emotions engender different patterns of autonomic responses (cf. Ax, 1953; Ekman et al., 1983; Funkenstein, 1955). Patients who lack the ability to
move or
to feel somesthetic stimulation
of the body still retain a large portion of autonomic sensitivity via the cranial nerves, especially the
vagus
nerve. Moreover, these patients report a diminished emotional intensity, feeling "as if" they were angry rather than truly angry.
Finally,
the experiments involving the effects of adrenaline injections were incomplete in design, missing
an
important point that even James missed. These studies form the basis for the remainder of this
section.
Schachter and Singer's Model
The experiments of Schachter, Singer and their colleagues (e.g., Schachter,
1971; Schachter &
Singer, 1962) have shed new light on the James-Lange theory of emotions. Their results
show
clearly
that autonomic arousal can set the stage for (rather than being the result of) emotional experience,
and
elucidate some of the difficulties that other experimenters (including Cannon and James) have had
in
triggering emotional reactions with adrenaline injections. We turn now to a consideration of some
of
their results.
A typical experimental procedure employed by Schachter and Singer involves the injection of either adrenaline or saline (a placebo) and the presence or absence of an emotion provoking situation.
In
each study, the subjects who had received the injections were divided into two groups. One
group
simply filled out a questionnaire that contained some rather pointed items. The second group
filled out
the same questionnaire, but a confederate who pretended to be a subject vividly expressed his
outrage
at the nature of the questions, tore up the response sheet, and stomped out of the room. Post-test
interviews showed the following pattern of results: (a) The questionnaire per se did not elicit
anger for
either the subjects injected with the placebo or those injected with adrenaline. (b) The subjects
injected
with the placebo did not experience anger, even when exposed to the confederate. (c) The
subjects
who had received adrenaline injections, however, were strongly influenced by the confederate and
experienced anger over the nature of the questionnaire.
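The pattern of results can be captured by a simple two-factor rule: emotion requires both autonomic arousal and a relevant cognition. The sketch below is a schematic restatement of that logic, not the authors' own analysis; the condition labels are taken from the design just described.

```python
# Schematic two-factor rule: emotion requires BOTH unexplained autonomic
# arousal AND a relevant cognition supplied by the environment.
def reported_emotion(aroused, provoking_context):
    if aroused and provoking_context:
        return "anger"   # arousal is attributed to the confederate's outrage
    return "none"        # either factor alone is insufficient

conditions = [
    ("placebo, questionnaire only",    False, False),
    ("placebo, angry confederate",     False, True),
    ("adrenaline, questionnaire only", True,  False),
    ("adrenaline, angry confederate",  True,  True),
]
for label, aroused, context in conditions:
    print(f"{label:>32}: {reported_emotion(aroused, context)}")
```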
The interpretation of these results is that the emotional experience requires both autonomic
arousal
and
a relevant cognition about the environment. Extending this notion further, it was proposed that
the
subjects explained their autonomic arousal by attributing it to the anger about the questionnaire,
as
expressed by the confederate. Since the questionnaire alone was a rather mild stimulus, it could
not
provide a sufficient account for the autonomic arousal until the flame was fanned, so to speak, by
the
confederate.
The explanation outlined above would be very tenuous, were it not for the complementary
results
of
additional experiments. One such experiment used exactly the same treatments (adrenaline or
placebo)
and the subjects were asked to fill out a long and tedious questionnaire. This time, the
confederate
rebelled against the tedium of the task and began a high spirited game of basketball, using the
wastebasket and some extra copies of the questionnaire. The pattern of results was the same:
There
was no particular emotion attached to the questionnaire per se for either the placebo or the
adrenaline
groups. Likewise, those subjects who had received the placebo paid little attention to the
confederate.
However, those subjects whose sympathetic nervous systems had been aroused by the adrenaline
were
strongly influenced by the antics of the confederate as revealed by their post-test expressions of
euphoria.
There is a great deal of power in these two experiments. A particular emotion cannot be ascribed to the effects of the drug (adrenaline) alone. Nor can an emotional experience be triggered by the mere
presence of a mild environmental situation. But the combination of sympathetic arousal and an
appropriate environmental situation can produce a full blown emotional reaction. In the words of
Schachter and Singer, the subjects who have been injected with adrenaline have a state of arousal
that
is in search of an appropriate cognition. These results have been extended in a number of novel
designs, including one in which prior adrenaline injections increased the number of belly laughs
during a
slapstick comedy film. The framework of this theory has even included a naturalistic setting in
which
male subjects who had just walked across a high suspension bridge (presumably providing their
own
adrenaline) rated a female confederate significantly more attractive than males who had not
crossed the
bridge.
A final experimental manipulation provided the capstone for this notion of emotional
experience. If
the
subjects were informed that the drug that they had received was adrenaline and told that it would
produce an increase in heart rate, some flushing of the skin, and a general feeling of arousal, the
emotional experience was forestalled: The symptoms were attributed to the drug action rather
than to
the antics of the confederate or the humor of the comedy.
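With this last manipulation in hand, the two-factor rule sketched earlier can be completed with an attribution clause: arousal that already has an adequate explanation is not available for emotional labeling. Again, this is a schematic restatement of the findings rather than a formal model.

```python
# Schematic attribution rule including the "informed" manipulation.
def reported_emotion(aroused, context_emotion=None, informed_about_drug=False):
    if not aroused or context_emotion is None:
        return "none"                        # both factors are required
    if informed_about_drug:
        return "none (attributed to drug)"   # arousal is already explained
    return context_emotion                   # arousal is labeled via the context

print(reported_emotion(True, "euphoria"))                            # -> euphoria
print(reported_emotion(True, "euphoria", informed_about_drug=True))  # -> none (attributed to drug)
```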
The results of these experiments add a new dimension to the effects of various drugs,
especially
those
that are designed to stabilize emotions or reduce anxiety. It is clear that the effects of these drugs
could
be either on the central interpretation of the environment (i.e., the cognition) or on the peripheral
arousal
aspects. It is very likely that the autonomic stabilizing effects play an important role in changing
an
individual's interpretation of the environment. Just as the subjects in Schachter's experiment say,
in
essence, that they must be experiencing an emotion because that is the only explanation they have
for
their state of arousal, so is it possible that an individual whose autonomic nervous system has been
stabilized by an antianxiety agent may conclude that the situation must not be anxiety provoking
because
there is no autonomic arousal. Figure 4-16 shows a summary of some of these effects.
2. Pavlovian conditioning procedures show that fear can be evoked by previously neutral
stimuli
that
have been paired with aversive events.
3. Instrumental conditioning involves two factors: Pavlovian conditioning of fear responses
and
learning
of behaviors that are instrumental in changing these relationships.
5. The major response to short term stressors is the so-called fight or flight response of the
sympathetic
nervous system.
5. Longer exposures to stressors can result in the progressively more severe stages of the
General
Adaptation Syndrome.
6. Acute trauma such as surgery can lead to the shock syndrome, a diffuse outpouring of the
entire
autonomic nervous system.
7. The lack of a coping response for acute, profound stressors can lead to sudden death
through
overreaction of the parasympathetic nervous system.
8. The response to stress can be systematically changed by behavioral and pharmacological
interventions.
9. The major forces that lead to ulcers are the inability to predict or control aversive events,
and
the
presence of conflicting consequences (sometimes rewarded; sometimes punished) of behavior.
10. The stress response is more closely related to the interpretation of the environment than
to the
physical intensity of the aversive stimuli.
11. The search for better stabilizers of the autonomic nervous system led to the discovery of
chlorpromazine and related phenothiazines known, collectively, as tranquilizers or antipsychotic
drugs.
12. The benzodiazepines rather specifically reduce the effects of punishment, and are widely
prescribed (e.g., Librium and Valium) as antianxiety drugs.
13. The phenothiazines have a high affinity for dopamine receptors.
14. The benzodiazepines have a high affinity for specific receptors that have now been linked
to the
GABA receptor complex. The presence of these receptors has suggested the possibility of an
endogenous antianxiety compound in the brain.
15. Anticholinergic drugs appear to have excellent anti-punishment properties in animal
experiments,
but because of the peripheral side effects, they have little clinical value in the treatment of day to
day
anxieties.
16. The quaternary forms of atropine and scopolamine have been useful experimentally
because
they
block cholinergic synapses in the periphery, but do not cross the blood brain barrier.
17. Cimetidine (Tagamet) is a very specific blocker of the H2 histamine receptor, and is
widely
prescribed to reduce the gastric acid secretion that can lead to ulcers.
18. Feedback from the autonomic nervous system plays an important role in determining
whether
or
not an emotion will be experienced; environmental cues interact with this feedback to determine
the
nature of the emotional response.