Military and Civilian Perspectives on the Ethics of
Intelligence—
Report on a Workshop at the
Department of Philosophy
Jean Maria Arrigo, Ph.D.
Virginia Foundation for the
Humanities and Public Policy
145 Ednam Dr., Charlottesville, VA
22903
(804) 296-4714; jmarrigo@pacbell.net
Paper presented to
The Joint Services Conference on
Professional Ethics
Springfield, Virginia
January 25-26, 2001
My
JSCOPE ‘00 paper on "Ethics of Weapons Research for Insiders and
Outsiders" argued that insiders—members of the military and
political intelligence community— and outsiders—nonmembers and even
nonsympathizers—must jointly negotiate practicable ethical standards for
intelligence operations. In short:
Outsiders cannot ultimately impose moral constraints on
intelligence operations because they cannot monitor operations. Outsiders would have to breach barriers
designed to thwart enemy intelligence agencies and to override the decisions of
people who are willing to sacrifice their lives for national security
goals. Insiders cannot ultimately
impose moral constraints on operations because, under duress, their moral
commitments to national security goals may override their moral commitments to
military and civilian codes....
But
how can outsiders contribute to a practical understanding of
intelligence ethics? Here the
difference between military ethics and intelligence ethics comes into
play. Apart from law enforcement
officers and prison guards, few civilians employ physical force against enemies
of the public good, so few civilians can bring practical experience to military
ethics. Almost all citizens, though,
have practical experience as intelligence agents. Some familiar civilian intelligence contests are: Internal
Revenue Service versus taxpayers, insurance underwriters and claims adjusters
versus policy owners, lenders and credit agencies versus borrowers, plaintiffs
versus defendants in the judicial system, corporations versus competitors,
marketers versus consumers, educational testing services versus students, and job
recruiters versus job candidates. To
the extent that these enterprises involve transcendent goals, such as fair
taxation, equal access to education, and equal employment opportunities, they
generate difficult moral problems like those confronted by intelligence on
behalf of national security. This is to
say that outsiders to political and military intelligence can indeed bring
practical experience to intelligence ethics.
At
the Claremont Workshop, six insiders and six outsiders presented case histories
of intelligence operations to several ethicists and a few artists.
We construed intelligence operations
in the broad sense of
strategic collection, analysis, and
deployment of information for advantage over adversaries or protection from
adversaries.
Our goals were to search out the
moral parameters for intelligence operations and to seek a common framework for
military and civilian operations, not to establish moral principles or methods
at this early stage of inquiry.
In most of
the case histories, presenters had played morally significant roles, and
operations had involved substantial moral trade-offs. One insider had served as an Office of Strategic Services (OSS)
agent behind enemy lines in World War II and another insider had served as a
military counternarcotics agent in Latin America. One outsider, a principal at a private school, had blown the
whistle on an unethical academic intelligence operation, and another outsider,
a civil litigation attorney, had revealed corporate fraud using a standard
judicial intelligence operation. The
workshop ethicists, drawn from the Claremont Colleges, were philosophers from
various specialties, such as epistemology and naturalistic ethics, but all were
broadly competent in traditional ethics.
The task of the philosophers-as-ethicists was to inquire into a moral
theory for intelligence operations—if possible, a moral theory encompassing
both civilian and military perspectives.
The artists consisted of a poet, a print maker, and a
composer-guitarist. The task of the
artists was to keep an eye on the limitations of case analysis and to expand
the scope of inquiry with later art works.
The ethicists were recruited by CGU Professor Charles Young, whose field
is ancient Greek moral philosophy, especially Aristotelian ethics. As a social psychologist studying
intelligence ethics and as daughter of an undercover intelligence officer, I
recruited the insiders, outsiders, and artists for the workshop. Because of participants’ concerns about
confidentiality, attendance was by invitation only.
The
pilot workshop was built around three workgroups, each consisting of two
insiders and two outsiders, an artist, and one to three ethicists, with one
ethicist serving as moderator. From
1:00 to 10:00 p.m., the workshop spanned an introductory lunch, two
workgroup sessions for case presentations, a working dinner, a plenary session,
and a social hour.
Inasmuch
as some military, legal, and personal matters of confidentiality are still
unsettled, at this point I will discuss the presentations in generalities and
omit names of a few presenters.
III. An Interpretation of Case Presentations
In
the plenary session, speaking for the philosophers, Charles Young identified
what might be called a moderate consequentialist moral dynamic for
intelligence operations. Loosely
paraphrasing the transcript:
A
lot of moral theories nowadays are consequentialist, in holding that morality
requires you to do in any circumstance whatever makes for the best
outcome—however it is you measure good outcomes.... But most of us believe there are constraints, cases or situations
in which it's positively wrong for me to do what makes for the best
outcome. It's immoral. Most of us, for example, probably think
it's wrong for me to take your kid's book and give it to the
library, even though—let's agree for the sake of the point— that would result
in the greater public good....
One
thing that seemed to come out in our sessions is that the constraints on
intelligence practitioners turn out in many cases to be very small. The reason is that the good outcome that
intelligence workers are aiming at—the continued existence of the United States
with its military and industrial capacity and its political systems intact—is
so important a goal that the usual constraints against harming other people are
overridden. The result is that lots of
things happened that just look on their face to be immoral. The higher the stakes, the easier it is to
override the constraints.
Figure
1 schematizes the moral dynamic of this moderate consequentialist position.
Points on the graph represent
operations. The height on the vertical
axis indicates the degree of harm resulting from an operation. The width on the horizontal axis indicates
the severity of the stakes. The curve—the moral divide—indicates the severity of stakes that entitle operators to
override moral constraints on harms of a certain degree. Points below the moral divide represent
morally tolerable operations, from the perspective of the political constituency
of the intelligence unit (e.g., a congressional oversight committee). Similarly, points above the moral divide
represent morally intolerable operations.
(Of course, harms and stakes both have many aspects and can only be
ranked approximately, at a particular point in history.)
The
case history presented by a high school principal is illustrative. She reported on an academic intelligence
operation at a private school. As a new
principal, she had discovered an administrator's scheme for ensuring straight
A's for a child of affluent parents.
The administrator was a charismatic personality and a successful
fundraiser for the school. The parents
were influential and capable of making large donations to the school. The administrator’s grade scheme might be
construed as a (barely) morally tolerable academic intelligence operation
(point A on Figure 1), considering the high stakes: the possible financial failure of the area's most successful school offering children a Christian education.
The principal attempted to correct the situation by backing a teacher
who gave the student an honest B grade.
The administrator responsible for the grading scheme fired the teacher,
sidelined the principal, and surreptitiously changed the student's transcript. The principal believed that it was illegal
to change grades without the teacher’s consent (although the state statute may
not have applied to private schools).
The principal notified the assistant pastor of the church associated
with the school, who passed on the information to the head pastor, who passed
it on to the school board. Preoccupied
with large financial losses due to a fraudulent building contractor, the school
board did not respond. Eventually the
head pastor confronted the administrator.
Through months of apparent espionage and sabotage (e.g., taping
telephone and private conversations secretly and sending tapes around), the
administrator had allegedly laid the groundwork to discredit the head pastor,
rally the school parents against him, and stage a successful coup in the
congregation. This was surely a morally
intolerable academic intelligence operation (point B), as the school board came
to agree and finally dismissed the administrator—with the assistance of the police, because she would not vacate the school premises.
In
a strictly consequentialist model, the moral tolerability of operations would
be tied only to their utility (to be discussed later), not to constraints based
on the relationship between harms and stakes, as delineated by the moral divide
in Figure 1. In this optimistic
version of the moderate consequentialist model for intelligence
operations, as the stakes increase, the tolerable harms never reach the point
of atrocity but only rise asymptotically.
I will elaborate this moral-dynamics graph as an interpretive framework
for the workshop case histories. Then I
will return to sample the comments in the workshop plenary session by insiders
and outsiders, ethicists, and artists.
The Moral Dynamics Graph as an Interpretive Framework
Severe
harms on the part of intelligence operations may be tolerated under threat of
total loss. The prototypical example is
the OSS response to Japanese and Nazi aggression in World War II. The former OSS agent, Tom Moon, said of
operations in North Burma behind Japanese lines: "We had absolutely no compunction about what we were doing
against the Japanese. We were losing
the war at this time, and when somebody's got you up a dark alley and they're
about to cut your throat, ... you'll do
anything it takes to survive—immediately." He described paying 8,000 guerrilla troops with opium because money
was worthless in the jungle and turning over probable spies, after
interrogation, to the Burmese natives for "release”—then hearing the
reliable gunshots of execution. “Two
of our men were buried alive by the Japanese.
One of their men then got burned at the stake in retaliation. It got that brutal. But word had to get back that we weren’t going
to take it.” Yet Moon and his fellow
agents embraced certain moral constraints.
For instance, they refused to place in Japanese hospitals the
cyanide-laced aspirins supplied to them for this purpose, because Burmese
doctors and nurses or their children, that is, noncombatants, might consume the
aspirins. But the severity of morally
tolerable operations did increase beyond the atrocity level. In the realistic moderate
consequentialist model, Figure 2, the degree of morally tolerable harms rises
without bound as the stakes increase towards total loss, in the perception of
the constituency of the intelligence unit.
For the OSS, the total loss point was Nazi or Japanese defeat of the
United States. Under conditions
evocative of total loss, or “supreme emergency,” operations that result in
atrocities may even be morally tolerable [Walzer,
Michael, 1977, Just and Unjust Wars.
New York: Basic Books, p.
247]. In Figure 2 the moral divide
rises above the atrocity level in conditions of supreme emergency when the
stakes are close to total loss.
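The contrast between the two versions of the model can be put compactly in symbols; the notation below (D for the moral divide, A for the atrocity level, s_total for total loss) is my shorthand, not the workshop's. Writing h for the harm of an operation and s for the stakes, an operation is tolerable to the constituency just when its harm falls below the divide, and the two figures differ only in the limiting behavior of D:

```latex
% An operation (s, h) is morally tolerable to the constituency iff
% its harm lies below the moral divide:
\text{tolerable}(s, h) \;\Longleftrightarrow\; h \le D(s),
\qquad D \text{ strictly increasing in } s.
% Optimistic model (Figure 1): tolerable harm approaches but never
% reaches the atrocity level A:
\lim_{s \to s_{\mathrm{total}}} D(s) = A .
% Realistic model (Figure 2): tolerable harm rises without bound as
% the stakes approach total loss ("supreme emergency"):
\lim_{s \to s_{\mathrm{total}}} D(s) = \infty .
```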
Different stakeholders, such as militarists and
environmentalists, may disagree as to what constitutes total loss. For militarists, total loss may mean
military defeat of their own nation.
For environmentalists, total loss may mean destruction of all human life
or of ecological systems. John
Lindsay-Poland, a specialist on U.S.–Panama relations for the Fellowship of Reconciliation,
described his research into secret chemical weapons dumps in the Canal
Zone. The chemical weapons could create
uncontrollable environmental and human disasters. The departing U.S. military cannot clean up the weapons without
acknowledging the problem, which militarists believe would jeopardize security
interests. For the Fellowship of
Reconciliation, whose primary sphere of moral concern includes Third World
peoples and local environments, the point of total loss is far beyond military
defeat of the United States, as shown in Figure 3. (The harms and stakes axes for militarists and environmentalists
are superimposed on a single graph for a very crude comparison.) In their view, jeopardy to U.S. military
supremacy does not justify overriding major moral constraints, such as the
"provision in the Canal treaties for the U.S. to 'remove hazards to human
health and safety, insofar as may be practicable' by the time the U.S.
withdraws." Severe threats to the
habitability of the planet, however, would warrant overriding constraints.
Moon stated a moral principle
for intelligence operations that his OSS unit had employed in evaluating
"the whole catalogue of 'James Bond gadgetry'" the government had
supplied to them. According to his utility
principle, no harm can be justified unless it is actually believed to be
useful in reducing the threat to security.
Harms that are morally intolerable, however, such as burning enemy spies
at the stake, may be useful if concealed from the constituency of the
intelligence unit. The likelihood of exposure
and, further, censure of secret operations depends very much on the political
power of the intelligence unit. For a
poorly positioned intelligence unit, even operations that are morally tolerable
to its constituency may not be useful, because of the likelihood of exposure
and censure by more powerful factions.
Tashi
Namgyal, a former security official of the Tibetan Government-in-Exile,
presented this example. In most
countries, punishments for treason and sabotage are very severe—long imprisonment
or execution. The Tibetan Security
Department currently functions within other countries and must obey the laws of
their hosts in prosecuting Tibetan agents who betray their compatriots. This results in near immunity for traitors
and saboteurs, in spite of the great risk to the Dalai Lama from collaborators
with the Chinese government. On the
moral dynamics graph in Figure 4, the utility line for a very poorly positioned
(“weak”) intelligence unit rises no higher than the maximum level of harms legally
permitted in host countries, which is well below the moral divide when the
stakes are high.
For
a very well positioned (“powerful”) intelligence unit, operations may be useful
that are not morally tolerable because exposure, or at least censure, can be
suppressed. Jose Quiroga, the personal physician of President Salvador
Allende, presented the case of the CIA-engineered military coup in Chile in
1973, which was instigated by President Richard Nixon and his national security
advisor Henry Kissinger. In this case,
as documented by Quiroga, neither U.S. congressional investigation nor presidential
orders have forced revelation of all relevant CIA documents. Even the eventual exposure of morally
intolerable aspects of the operation did not result in significant censure. As shown in Figure 4, the utility line for
such a well-positioned intelligence unit as the CIA may run above the moral
divide.
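In the same spirit, the utility lines of Figure 4 can be sketched; U and L below are my shorthand, not the workshop's. A weak unit's useful operations are capped by the maximum harm legally permitted in host countries, which lies below the moral divide when the stakes are high, while a powerful unit that can suppress exposure and censure may find useful even operations above the divide:

```latex
% Weak unit (e.g., the Tibetan Security Department): useful harm is
% capped by the legal maximum L of the host countries, which falls
% below the divide at high stakes:
U_{\mathrm{weak}}(s) \le L < D(s) \qquad \text{for high } s.
% Powerful unit (e.g., the CIA): with exposure and censure suppressed,
% the utility line may run above the divide:
U_{\mathrm{powerful}}(s) > D(s) \qquad \text{for some operations}.
```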
A
civil litigation attorney, Rafael Chodos, contrasted this variable
vulnerability to disclosure and censure, on the international political scene,
to fairness in judicial intelligence operations under California's discovery
process. By law, both plaintiff and
defendant must fully disclose all evidence that is likely to be admitted into
trial proceedings. When one party
petitions the other for relevant information, the other party must supply it,
if the court agrees to the relevance.
Of course, there are flaws in the process. In the particular case presented, after a two-year delay and
various deceptions by the defendant, Chodos was permitted to examine 37,000 of
the defendant's documents, for only two hours, at a cost of $10,000—which he
nevertheless accomplished to his satisfaction with a hand-held scanner. In California, the legal penalty for
completely refusing to produce relevant information requested by the other
party is a court decision in favor of that party. That is, in order to avoid disclosure the party must
default. In military and political
intelligence operations, only the defeated can be forced to show their
hand.
Individual Role Players in
Intelligence Operations
From
a consequentialist perspective, the points on the moral-dynamics graph must
represent operations, not decisions or actions by individual agents, because
operations coordinate the actions of individuals to produce the
consequence. In intelligence
operations, individuals are frequently uninformed or deceived as to the full
meaning of their actions, or they are goaded by hard circumstance to act
contrary to their intentions. The poor
correspondence among knowledge, intention, and action on the part of
individuals is not remediable because it is essential to the success of
secretive and deceptive operations.
A
medical engineer, Eldon Byrd, reported a case that illustrates this point. After working on the Polaris submarine,
which carried long-range nuclear weapons, Byrd developed nonlethal weapons with
reversible effects. He regarded this as
a humanitarian alternative to "punching holes in people and having their
blood leak out" in battle. His
inventions used magnetic fields at biologically active wave frequencies to
affect brain function. Byrd could put
animals to sleep at a distance and influence their movements. When the success of his research became
evident, suddenly he was pulled off the project and it went
"black." He believes the
electromagnetic resonance weapons he developed have been used for psychological
control of civilians rather than for exigencies in battle. That is, to ensure his participation, he was
uninformed about the true nature of the project. Byrd’s case also illustrates how morally tolerable operations may
transition to morally intolerable operations, or at least rise above the
atrocity line.
Moon,
the OSS agent, spoke more favorably of the moral choices accorded him: "One thing I liked about our government
was we were told if ever we were given an assignment and you don't want to do
it..., you just say, 'I choose not to do that,' and you walk away and you will
not be questioned. It will be given to
someone else." If someone else accepts
the assignment, though, the operation itself may proceed in the same way with
different role players, with no change of location on the moral dynamics graph.
Another
presenter, who can only be identified as a military commissary man, related a
worse moral experience in the "war against drugs." He was repeatedly drawn into irregular
procedures, such as the burial at sea of a boat pilot shot by narcotics
traffickers. When the counternarcotics
unit he supported met armed resistance, he was pressed into hostilities against
noncombatants, including children.
Serious violations of the military rules of engagement led to very bad
morale in his unit: "We wouldn't
even talk to each other for days.... Most of us felt so dirty, like the filth
we thought we were stopping." One
of his fellow agents responded by committing suicide on military premises. But crewmembers who had remained aboard ship
to provide regular logistical support "were almost euphoric" with the
success of the drug interdiction and praise from superiors. Closely coordinated participants in an
operation can have very different moral experiences because of their different
roles and the tight compartmentalization of knowledge.
As
a further complication, individuals can play sincere roles in conflicting
operations. In researching abandoned U.S.
weapons and contaminants in the Canal Zone, Lindsay-Poland depended on covert
assistance from both U.S. and Panamanian officials. A U.S. military intelligence officer passed him critical
information—once—and encouraged him to continue his environmentalist efforts
for the good of the whole system.
Breakdown of the Moral-Dynamics
Graph
As
an interpretative framework for the cases, the moral-dynamics graph breaks down
at extremes. The two axes
are not truly independent (in spite of their perpendicularity on the
graph): the stakes are not independent
of the harms committed in intelligence operations. Large, powerful systems may be able to absorb the harms, such as
citizens' diminished trust of government, without much overall change, but
small systems may collapse, as did the Christian school described by
the principal. In the fracas among the
charismatic administrator, the head pastor, and the school board, the
enrollment dropped from 750 to 250 students in one month, which brought financial
collapse.
A
physician at the workshop, Sue Arrigo (my sister), discussed medical
intelligence operations in hospital intensive care units (ICUs). Especially in regard to terminal patients,
hospital administration, residents and nurses, and patients' families have
competing intelligence goals. Her case
of an ICU patient with Adult Respiratory Distress Syndrome, among other
ailments, offers a striking metaphor for small-system interdependence of
operational harms and security risks.
The patient "lost his ability to speak when put on a ventilator [to
help him breathe], lost his clarity of mind when sedated to prevent his gagging
on the ventilator tube in his mouth, lost his ability to move when tied down so
as not to pull out his tubes, and finally lost his sanity as he developed ICU
psychosis."
A second cause of breakdown of the
moral dynamics graph derives from the confound in some intelligence operations
between responding to reality and creating reality. A targeting intelligence officer illustrated
how—in this era of high-speed communications and technologies—intelligence
operations can create the image of risk and damage in which the war is
conceptualized and fought. But on the
moral dynamics graph, the stakes falsely appear to underlie such formative
intelligence operations.
IV.
Highlights
of the Plenary Session
Returning
now to the workshop plenary session, here are sample agreements and
disagreements cited by insiders and outsiders.
As a point of agreement, the legal separation of church and state in
American society tends to separate professional conduct from individual moral
beliefs in all professions. We need to
conceptualize intelligence activities in a way that unites them with fundamental
belief systems so practitioners can make moral decisions in the grip of
“supreme emergency.” Outsiders felt
that insiders, because of the secrecy, compartmentalization, and urgency of
their work, fail to seek guidance about the broader and long-term consequences
of operations, as in the CIA overthrow of Allende's government. Insiders regarded secrecy as a positive
device for societal security.
Outsiders tended to hold intelligence personnel morally responsible for
the unintended consequences of their actions, whereas insiders tended to take moral
responsibility only for intended consequences.
But it was noted that this discrepancy is found between insiders and
outsiders in law enforcement, medicine, science, and other specialties.
At
the plenary session, the participating philosophers-as-ethicists were Charles
Young (chair), Ann Davis, Grant Marler, and Kurt Norlin. (Paul Hurley, Brian Keeley, Dion
Scott-Kakures, and John Vickers participated in earlier sessions.) In critiquing the moderate consequentialist
position for intelligence operations, philosophers called attention to the
slippery-slope phenomenon in overriding constraints: morally tolerable operations easily gain momentum and slide into
morally intolerable operations.
Philosophers pursued the related problems of insufficient time for moral
deliberation and of limited rationality by decision-makers. They proposed that intelligence
practitioners, like everyone else, need strong moral principles that can be
used under pressure, even though this method is imperfect. Marler, who had a prior career in
intelligence, emphasized the special moral injury of depriving personnel, such
as the counternarcotics agent and the medical engineer, of agency by not
involving them in the decision making process.
Their sense of moral violation contrasted with the moral well-being of
Moon in OSS operations, whose unit in North Burma, beyond the range of military
supervision, had exercised thoughtful moral autonomy.
As
for a moral theory that could encompass both civic and intelligence ethics, the
philosophers were cautious.
A unified field theory? I don’t know. One needs to know a lot more about the kinds of problems that
come up to see whether there’s a common structure or not. What might strike one as a remote analogy is
the drug use controversy in Olympic athletes. There are things you’re prepared
to do that you probably ought not to do to win an Olympic medal—taking certain
kinds of drugs—and then various sorts of secrecies that attach to that. There are many similarities and probably
many differences between those cases and military intelligence cases. I think the idea of a casebook, in which
lots and lots of cases are compiled, is a great idea. —C. Young (paraphrased).
When you look at different
applications [of moral theory], whether it’s primarily in bioethics, or in
so-called business ethics, or in environmental ethics, or in ethics of
intelligence operations, certain features are more salient and your principles
are going to be more responsible, or your sensibility more responsive, to those
features.... What we want to do is get enough credibility for a theory of
application that people who are concerned with moral theory are paying
attention to it. —A. Davis (paraphrased).
On the one hand, I don’t see how a
piecemeal approach can carry much weight with people. Right and wrong are not discipline-specific concepts, and people
intuitively understand that. Any bag of
discipline-specific theories will therefore always be recognized as a mere
patchwork of ideas that betrays our lack of deep understanding of the concepts
involved. On the other hand, I’m
pessimistic about getting widespread acceptance for a unified moral theory,
because accepting a moral theory involves accepting the worldview in which the
theory naturally fits. Our moral theory
is bound to look very different depending on whether we view human beings as
God’s greatest creation, or as biologically evolved organisms, or as souls
traveling an eternal cycle of death and rebirth, or what have you.... So the
Unified Theory is unforeseeably far down the road.—K. Norlin (paraphrased
from later comments).
Finally,
the artists reflected on the workshop.
John Crigler, a guitarist-composer, asserted that the ethical good is
distinct from the aesthetic good, and it damages human beings to limit either
one. We should stop struggling, as we
had earlier in the workshop, with the image of a Nazi death camp commander
enjoying Mozart. James Groleau, a print
maker, questioned whether we had missed a nonpolitical motive for secrecy,
related to personal identity and work process.
He prefers to keep his preparatory sketches private, but gave
participants a glimpse of one workshop sketch, just as they had offered
glimpses of their secrets. Cynthia
Ford, a poet, remarked on the marvel of workshop participants just being there
together: people we believe to be
monsters or absurd are sitting next to us at dinner eating lettuce with us.
V.
Follow-up to the Pilot Workshop on Ethics of Intelligence Operations
As
planned, we are assembling a casebook for intelligence ethics, beginning with
the workshop case histories. Political
sensitivity of case information and personal vulnerability of presenters has
greatly complicated the assembly. The
workshop case histories and commentaries will be supplemented with extensive
moral development interviews of a few intelligence professionals. Grant Marler and I conducted a two-hour
interview with Harold William Rood, Keck Professor of Strategic Studies at
Claremont McKenna College. Rood
described Mahatma Gandhi in some detail as a monster of political
manipulations. That interview contrasts
interestingly with my five-hour interview of the Tibetan Buddhist security
official, Tashi Namgyal. He protects
another political-religious authority who teaches nonviolent action, the Dalai
Lama. The moral development interviews
suggest that life experience may more easily explain some discrepancies in
intelligence ethics than moral reasoning from moral principles. A preliminary manuscript of workshop cases
and moral development interviews will soon be available to give intelligence
ethics an “empirical dimension” and bring it into conversation with history and
human development, as Jonathan Glover, ethicist of Twentieth Century
atrocities, has requested [1999, Humanity:
A Moral History of the Twentieth Century. New Haven, CT: Yale
University Press, p. xii].
The Claremont pilot workshop neglected,
for simplicity, many crucial issues that affect the moral dynamics of
intelligence operations. One such issue
is the relationship between evidence and accountability in deceptive operations. At the workshop, an investigative journalist-attorney,
Andrew Basiago, reported on an illustrative case. A client of his said that she had been a CIA courier in Europe
and had been handled through mind-control techniques. She was seeking damages for devastating psychological
consequences. His nearly impossible
task was to find confirmatory or disconfirmatory evidence of the alleged
damages so that moral judgment might even be rendered. Other crucial issues are: the interplay of institutional factors and
moral accountability, the contributions of ordinary social processes (such as
ingroup bias) to moral problems in intelligence, the contributions of
individual psychological processes (such as cognitive behavior under stress),
and the institutional, social, and psychological opportunities for improving
moral practices in intelligence operations.
Future workshops will emphasize these various issues, with assistance of
scholars from pertinent disciplines.
The
pilot-workshop artists have also undertaken to expand the scope of inquiry of
intelligence ethics. In closing I would
like to play for you a passage from an audiotape by Harold William Rood and
John Crigler. This audiotape is one in
a series of artistic vignettes that present moral puzzles in intelligence
operations. Here Rood presents his
argument that the United States should have fought to win in Vietnam, and
Crigler spontaneously follows his argument and comments with the guitar. The “duet,” so to speak, straddles a wide
political chasm. Rood dug up German
land mines in World War II, analyzed Chinese intelligence in the Korean War,
and taught field interrogation for ten years at a military college. Crigler, on
moral grounds, extended himself to avoid the Vietnam War, regretfully breaking
with family tradition. (Crigler’s
father was a surgeon at Omaha Beach, and his great-grandfather rode with
Mosby's cavalry.) In this passage,
Rood argues that American lack of will to win the war in Vietnam has made war
with China inevitable—nuclear, chemical, and biological war—and on American soil. (If possible, a machine-playable audio-file
of the passage will be inserted later.)
For review of this paper I am
grateful to Charles Young and John Crigler.
Cynthia Ford negotiated with workshop presenters for permission to reveal
their case histories to varying degrees.
Robert Roetzel introduced me to Michael Walzer’s concept of “supreme
emergency.”
Appendix A
The Relevance of Intelligence Ethics to JSCOPE
The full moral dignity of the armed services
in American society requires coordination among intelligence ethics, military
ethics, and civilian ethics. In the
public mind, the military is morally bound to its affiliates: the CIA, the Atomic Energy Commission, the
weapons industry, and so on. The military is also institutionally inseparable from these affiliates.
Col. L. Fletcher Prouty, “the Focal Point officer for contacts between
the CIA and the Department of Defense on matters pertaining to the military
support of the Special Operations of that Agency” from 1955 to 1963, has
detailed the penetration of the military by the CIA and the usurpation of
military resources for CIA operations [1973, The Secret Team, Englewood Cliffs, NJ: Prentice-Hall, p. vii]. Military virtue notwithstanding, for moral
legitimization with the “liberal elite,” I believe the military will have to
answer for CIA covert operations.
Contemporary
multi-national military interventions for humanitarian purposes further create
a demand for conspicuously "clean hands" intelligence. "Dirty hands" intelligence may be
tolerated for the sake of national interests, but only "clean hands"
intelligence supports the moral rationales for peacekeeping operations,
disaster relief, care of refugees, arms inspection and control, war crimes
prosecution, and monitoring of environmental conventions [Eriksson, Pär.
(1997). Intelligence in
peacekeeping operations. International
Journal of Intelligence and Counterintelligence, 10 (1), 1-18.] Explicit ethical standards would also
assist intelligence services in collaboration with idealistic disciplines, such
as anthropology and medicine, and with altruistic nongovernmental
organizations, such as the Red Cross.
Further, intelligence services themselves need recognized ethical
standards for recruitment, morale, and retention of personnel, as emphasized by
former Inspector General of the Central Intelligence Agency Frederick P.
Hitz [(2000). The future of American espionage. International Journal of Intelligence and
Counterintelligence, 13 (1), 1-20.].
Appendix B: Outline of Workshop Program
Pilot Workshop on
The Ethics of Political and Military Intelligence
For Insiders and Outsiders
Department of Philosophy
Claremont Graduate University
September 29, 2000
Box Lunch & Participant Introductions 1:00 — 1:30 p.m.
Workshop Introduction and Instructions 1:30 — 1:55 p.m.
First Workgroup Session—Insider Presentations of Cases 2:00 — 3:15 p.m.
Theme I. Moral issues related to high technology
A. Targeting intelligence for multinational forces
B. Nonlethal weapons development
Theme II. Moral issues related to covert operations
A. Military counternarcotics operations in Latin America
B. World War II OSS operations behind enemy lines
Theme III. Moral issues related to institutional and organizational factors
A. Interviews of Tibetan refugees by the Department of Security of the Tibetan Government-in-Exile
B. U.S. Army military police at a base in Central America
Second Workgroup Session—Outsider Presentations of Cases 3:45 — 5:00 p.m.
Theme I. Moral issues related to high technology
C. Termination of life support systems to patients in intensive care units
D. Long-term psychological consequences to an intelligence courier subjected to mind-control techniques
Theme II. Moral issues related to covert operations
C. The 1973 CIA-instigated military coup in Chile
D. Grassroots government-opposition movements in the Panama Canal Zone
Theme III. Moral issues related to institutional and organizational factors
C. Civil litigation and the process of discovery under California law
D. Religious private schools and the Nevada Revised Statute for private schools
Working Dinner (to develop presentations for plenary session) 5:15 — 7:15 p.m.
Mixed tables of insiders and
outsiders
Ethicists’ table
Artists’ table
Plenary Session 7:30 — 9:00 p.m.
A. Insiders and Outsiders: Principal Differences in Moral Perspectives — 30 minutes.
B. Ethicists: The Coordination of Intelligence Ethics and Civic Ethics — 45 minutes.
C. Artists: Reflections on the Workshop — 15 minutes.
Conversation and Musical Respite 9:00 — 10:00 p.m.
Appraisal of schedule: This schedule was workable, but it would have profited from a longer workshop introduction and discussion to arrive at a common starting point, two-hour workgroup sessions to permit satisfactory development of complex cases, and an overnight rest for contemplation before the plenary session.
Initially, however, we doubted that participants would commit even these lesser amounts of time. As it turned out, everyone showed up and (with one exception) engaged enthusiastically, with much mutual interest between insiders and outsiders. Some undercover victims of intelligence operations—both insiders and outsiders—made exciting discoveries in private conversations. Insiders and outsiders had the opportunity to air their moral qualms about operations as well as their moral triumphs.