Jean Maria Arrigo, Ph.D.
110 Oxford St.
Irvine, CA 92612
Paper presented to
The Joint Services Conference on Professional Ethics
Washington, DC, January 27-28, 2000
My aim, as a social psychologist, is to establish a foundation for fruitful moral discourse on weapons research by insiders and outsiders. To this end, weapons research may be viewed as a joint venture of science and intelligence, as two parallel methods of inquiry. To assist outsiders in reasoning from the perspective of intelligence, I derive the principles of inquiry from four premises: (1) the ultimate goal of inquiry is advantage over an adversary; (2) the adversary is implacable and dangerous; (3) observations and interpretations are vulnerable to deliberate deception by the adversary; and (4) clients govern the broad topics, opportunities, and constraints of inquiry. These premises foster careful analogical moral reasoning between familiar adversarial endeavors such as workplace drug testing and the concealed endeavors of weapons research. Finally, I address the central moral question: For what moral rules on weapons research are we willing to lose a battle, a city, a war, the nation? The adversarial principles of inquiry imply that no moral rules can prevail in weapons research unless the rules support the methods of inquiry. I propose one such rule, the preparation of “moral impact reports” for prospective projects. This rule presents the “moral tracking problem”: how can a valid moral rationale for a weapons research project track the project as it evolves in response to changing circumstances?
The Ethics of Weapons Research for Insiders and Outsiders
For what moral constraints on weapons research are we willing to lose a battle, a city,
a war, the nation?
I propose this as the central moral question in weapons research. Outsiders cannot ultimately impose moral rules on weapons research because they cannot monitor it. Outsiders would have to breach barriers designed to thwart enemy intelligence agencies and to override the decisions of people who are willing to sacrifice their lives for national security goals. Insiders cannot ultimately impose moral rules on weapons research that contravene their own national security goals. My purpose here as a social psychologist is to engage your participation in broad social negotiation to reach agreement. First I explain the particular relevance of weapons research ethics to JSCOPE, and then I lay out my plan.
JSCOPE and the Ethics of Weapons Research
Histories of war portray the traditional values of courage, loyalty, discipline, and so on as crucial to the success of armies. Reliance on traditional weapons, however, has jeopardized success. Former Secretary of Defense William J. Perry articulated innovation in weapons as a principal defense strategy. Condensing a 1995 report from his office (FitzSimonds, 1995, pp. 3 & 20):
Military leverage derives from an ability to innovate and exploit change faster than the adversary can adapt to it. Prudence dictates that the military presume the obsolescence of apparently state-of-the-art systems, operations, and organizations and that it develop the capacity to adapt very rapidly to profound changes it did not anticipate. Clearly, peacetime innovations must be successful at the start of the next war, whether by pretest or some other means. Political constraints on weapons development, such as international arms limitations and budget decline, not only fail to inhibit innovation but seem to be a critical factor in driving it, for they offer greater stimulus to profound innovation [p. 20].
The empirical methods of weapons research generally require testing with human beings. For example, in 1945, when Manhattan Project scientists became concerned about the exposure of bomb production workers to plutonium, Oppenheimer authorized a metabolic study of plutonium on unwitting hospital patients. At least 210,000 active duty servicemen were exposed to radiation in atmospheric nuclear tests at the Nevada Test Site (Fletcher, 1990). Innovation and empiricism, which we normally regard as positive, thus drive the moral problems in weapons research. Character is a secondary matter.
The centrality of weapons research to defense, and the participation of the armed services as both funders and conductors of research, invite ethical review by JSCOPE.
What I wish to contribute is a framework for moral discourse by insiders and outsiders. I first offer a new representation of weapons research, as a joint venture of science and intelligence. Then I formulate the principles of inquiry of intelligence, in comparison with the principles of inquiry of science. Last, I endeavor to show how attention to these principles can assist negotiation of ethics in weapons research by insiders and outsiders.
The framework I propose is elaborated in my recent dissertation (Arrigo, 1999). It draws from my oral history interviews with retired military intelligence professionals, from archived oral histories of pioneers in radiation medicine, from ecological psychology and philosophy of science, and from my own background as the daughter of an undercover army intelligence officer.
A New Representation of Weapons Research
Weapons research is typically represented as applied science that is either ennobled by military goals or corrupted, depending on one’s political stance. Instead, I represent weapons research as a joint venture of science and intelligence, treated in parallel as knowledge-generating enterprises. Although the ultimate goal of the Manhattan Project was political, the daily work of intelligence professionals and scientists alike was generation and control of knowledge. (The Israeli intelligence theorist Isaac Ben-Israel has also drawn the analogy between science and intelligence.)
For the purpose of moral discourse, I take Intelligence (capitalized) to refer to the ideology, methodology, lore, and practices of inquiry for national security goals in the United States. Here Intelligence does not refer to agencies, which may themselves have incompatible goals, nor to individual practitioners, who may themselves fill incompatible roles. Similarly, Science (capitalized) as ideology, methodology, lore, and practices of inquiry must be distinguished from scientific institutions and individual scientists. The archived oral history interview of Willard Libby, member of the Atomic Energy Commission (AEC) (1950-1959) and Nobel laureate in chemistry (1960), illustrates the impossibility of separating individual practitioners of Science and Intelligence. Here is Libby speaking from an Intelligence perspective, in my use of the term (Libby, 1978):
I was put on the Atomic Energy Commission by President Eisenhower because of my helping on the hydrogen bomb decision. There weren't all that many scientists who were willing to stand up and talk. I didn't talk publicly; I talked privately, very effective places. And I knew where the buttons were. Power shifts from time to time, and you have to keep up with it. But I was appointed for that reason.
Here is Libby speaking as a scientist:
We no longer do bacteriological research for warfare ... leaving ourselves open [vulnerable] to a very, very important method of warfare. Same thing on chemical warfare. ... Did you ever read about what they did during the Black Death in the Middle Ages? Witchcraft. We're reduced to that essentially. Now, ignorance is our enemy, and what we ought to do is research. Find out what this thing is, and how come we can't control it.
To confound the principles of inquiry with the organizations and individuals who perform the inquiry creates confusion in moral reasoning about weapons research. In particular, attempts to resolve moral problems by establishing moral codes for scientists must fail in weapons research, where scientists often have deeper commitments to national security doctrine than to science ethics.
A Primer of the Adversarial Epistemology (Theory of Knowledge) of Intelligence
The second stage of my plan is to make the methodology of Intelligence accessible to outsiders. To this end I formulate a theory of knowledge, or epistemology, of Intelligence. I call this an “adversarial epistemology,” in contrast to the ideally “cooperative epistemology” of Science. The adversarial epistemology should enable scientists and philosophers to reason from the perspective of Intelligence without mastery of political and military history and the arcane lore of Intelligence, and it should enable them to reason from premises and principles, in a familiar analytic mode.
For simplicity, the targets of Intelligence will be referred to generically as the Adversary—an enemy nation, an ally, a rival intelligence faction, a terrorist group, dissident citizens. The public policy makers, political appointees, and military commanders who are the directors and consumers of Intelligence will be referred to generically as the Client. Most principles of the adversarial epistemology of Intelligence can then be derived from these four premises:
1. The ultimate goal of inquiry is advantage over an Adversary, not knowledge per se or knowledge applied to the general welfare.
2. The Adversary is implacable and dangerous.
3. All observations and interpretations are vulnerable to deception by the Adversary.
4. Clients govern the broad topics, opportunities, and constraints of inquiry.
In this talk I will lay out implications of one premise, the third, which creates the most crucial methodological differences between Science and Intelligence. Implications of the other three premises appear in my paper on the JSCOPE website.
1. The ultimate goal of inquiry is advantage over an Adversary, not knowledge per se or knowledge applied to the general welfare. [Implications omitted from JSCOPE 2000 presentation.]
Adversarial epistemology arises where competition for knowledge is crucial to the attainment of a limited good, such as military and political power. It is intrinsically partisan, because the value of knowledge depends on its utility to us. Knowledge from which advantage has faded passes into domains with cooperative epistemologies, such as science and history. Adversarial epistemology aims for a temporary, not permanent, stock of knowledge of particulars. Indeed, it may be safer to repeat certain inquiries than to stock knowledge of particulars that may be stolen by the Adversary or exposed to public censure. The permanence sought by the adversarial epistemology lies in heuristics, such as the tradecraft of espionage, and in strategy, such as Perry’s program of continual innovation in weapons.
Advantage determines the value of knowledge, and this introduces a gap between the validity of knowledge and the value of knowledge. On occasion, ignorance, error, or deliberate omission may serve advantage better, as when knowledge might evoke fears or sympathies or moral obligations that would compel us to act contrary to our advantage.
In the adversarial framework, the firmest criterion for knowledge of a phenomenon is that our
observations and interpretations make sense with respect to the self-interest of all parties
powerful enough to affect the phenomenon or our observations or interpretations of it.
As knowledge seekers in competition with the Adversary, our (a) study of phenomenon X, such
as uranium fission, leads us (b) to study also the state of the Adversary’s knowledge of X and,
further, (c) to study the state of his knowledge of our knowledge of X.
For example, in 1941 both Germany and the United States had nuclear bomb programs—first-level inquiry. Neither country had succeeded, and each sought to discern the progress of the other—second-level inquiry. After the war the British captured ten German atomic scientists and secretly recorded their conjectures about U.S. atomic bomb development after the attack on Hiroshima—a third-level inquiry. Scientists may also conduct secret intelligence on the progress of their competitors, but competitive rewards depend on publication of results, so the levels of secret knowledge collapse periodically. For Intelligence, though, the structure of inquiry tends to complicate itself even after defeat of the Adversary, as others compete for the Intelligence of the defeated. Up until 1993 at least, the British had not released the original tapes of the German atomic scientists, only translations of excerpts of the conversations (Cassidy, 1993).
2. The Adversary is implacable and dangerous. The circumstances of competition are believed to prevent reconciliation of opposing interests, so any concession to the Adversary or lapse in wariness will be exploited. When stakes are high, the Adversary may attempt to destroy us, not only to thwart our inquiry. [Implications omitted from JSCOPE 2000 presentation.]
The dangerousness of the Adversary creates a trade-off between accuracy and speed of inquiry, because self-defense demands timely response. Knowledge is critical preparation for action. In the words of philosopher of science Charles Peirce (1839-1914), “the principle upon which we are willing to act is belief,” and rightly so, because in the short run, science is as likely to guide us poorly as to guide us well (Skagestad, 1981, p. 206). Proper belief is the point on which Oppenheimer failed to satisfy the Personnel Security Board at his 1954 hearing (U.S. Atomic Energy Commission, 1971). Action demands proper character as well as proper belief. The adversarial epistemology therefore must incorporate standards of belief and character, such as core values, and methods of assessing belief and character, such as background checks and counterintelligence. Another measure of belief and character is the willingness to sacrifice oneself and others, which presents problems of discrimination. Col. Carl Eifler, head of the Office of Strategic Services detachment that served behind the Japanese lines in Burma, said, according to his biographer: “I figured we would all be killed. I really wasn’t concerned about whether I was violating any law” (Moon, p. 310).
Dangerousness directs our attention to prevention of surprises. Unlike scientists, we must forgo the ideal of perfection of knowledge in limited fields. Instead, we must spread our epistemic resources widely for brief inquiry into unlikely domains so as not to leave them unattended for exploitation by the Adversary. Thus dangerousness creates a further trade-off between precision and comprehensiveness. But ultimately there is no empirical method for anticipating the unanticipated. We must transcend empiricism, as in responding to the Adversary on the basis of the most destructive intentions imaginable from his actions.
Further, there is no final accounting of past events. As long as adversaries remain, any recounting of a past event may offer new opportunities and liabilities. For example, after the war, some German atomic scientists who wished to direct a nuclear energy program in Germany argued that their failure to produce the bomb had been due to wartime shortages and to their moral reservations (Cassidy, 1993).
3. Observations and interpretations are vulnerable to deception by adversaries.
The key difference between adversarial and cooperative epistemologies is the ever-present possibility of deliberate deception. The scientist confronts errors in observation and analysis due to unrepresentative samples, faulty instrumentation, omitted data, misused statistical analyses, and so forth. The same problems of self-generated error confront us, too. Where possible, we leave these problems to scientists, historians, economists, and other epistemic subcontractors, so to speak. Our more serious problem is error imposed by the Adversary through deliberate deception. Does the jiggle of the seismograph needle indicate an earthquake or a nuclear test conducted in a cavern for seismic disguise (Van der Vink, 1994)?
Regardless of our Adversary’s knowledge of the phenomenon X, he may (a) deceive us about the nature of X, (b) deceive us about his knowledge of X, or (c) deceive us about his knowledge of our knowledge of X.
Unlike in Science, in Intelligence systematic observation may increase vulnerability by creating a mechanism for deception. For example, observation of encoded messages renders the code-breaker vulnerable to deceptive messages. Indeed, the more we trust a method, a fact, an expert, an organization, or an ideal, the more attractive it becomes to the Adversary as an opportunity for deception. Techniques to limit our predictability, such as generation of huge amounts of meaningless data, actions that sacrifice our manifest interests, and out-of-character behavior, may improve our overall epistemic performance by thwarting deception.
Science can progress in explanation of phenomena through simplifying, reductionist approaches, such as genetic, neurological, or psychological approaches to human behavior. For Intelligence, though, the constant threat of deception undermines simplifying, reductionist approaches. Suppose, for example, in World War II we were to observe from a high altitude a convoy of enemy tanks. Closer inspection might reveal we had fallen prey to a perceptual deception of inflated-rubber tanks disguised as battle-ready tanks (Russell, 1981, p. 198). The purpose of the perceptual deception would still be at issue. To lure an attack at the site? To garner military aid from allies? Or suppose we were to observe a convoy of enemy tanks approaching our border. Close inspection might confirm they were indeed battle-ready tanks. Yet false diplomacy might convince us that the maneuver was a feint directed against a greater, common Adversary (Epstein, 1986, p. 128). In this case we would have fallen prey to a conceptual deception. The course of deception may change in response to our inquiries.
Our best analytic strategy for detection of deception is to search for inconsistencies. This focuses inquiry on anomalies rather than regularities, as in science. Intelligence inquiry is therefore minutely contextualist and cannot rely on general or abstract laws. This one time, the satellite warning system may react to the sun’s reflection from cloud tops and falsely signal a ballistic missile attack—as it did at Soviet warning-system headquarters south of Moscow in 1983 (Hoffman, 1999, p. 16). Science and engineering produced the satellite warning system. Military intelligence produced the pattern understanding of then-current U.S. aggression that led a Soviet missile officer to declare a false alert. We seek epistemic subcontractors who have cooperative epistemologies—scientists, historians, economists, linguists, and others—to portray the background of natural phenomena against which deceptions can be detected.
Our counterdeceptions of the Adversary likewise require detailed, contextualized portraits of persons and events in order to design deceptions. To carry a deception forward in time, the Adversary’s perception of a well-defined context must be controlled over a period of time sufficient for the deception to yield its effect. This demands constant feedback about the Adversary’s responses to our manipulations.
Defense against deception and perpetration of counterdeception require secrecy. Secrecy shapes organizations through selection and monitoring of personnel for loyalty, compartmentalization of inquiries to minimize vulnerability due to negligence and treachery, and so on. As organizational obstacles to inquiry accumulate, we are apt to compensate by shifting among policies that permit secrecy. Personnel selection criteria oscillate between educational specialization and generalization, academic and agency training, official cover (e.g., military attaché) and unofficial cover (e.g., journalist), and so on. Organizational structure oscillates between flatness and steepness in hierarchy, centralization and decentralization, collaborative and competitive relations between divisions, and so on.
The flows of information permitted by the organization further shape our representations and possible solutions to problems. For example, studying foreign intelligence according to geographic regions (Mideast, Latin America, etc.) may conceal patterns of transnational terrorist activities. Thus secrecy presents us with an epistemological puzzle from the field of artificial intelligence: what is the relationship between the structure of our organization and our capacity to solve a problem within the organization?
Secrecy also presents us with a moral puzzle. In a recent guest lecture at the U.S. Air Force Academy, the Honorable John T. Noonan, Jr., stated that “evil is recognized because it violates an interior, invisible law of our being,” with the implication that a person of good character will refrain from evil. Even if one were to accept Judge Noonan’s optimistic view, the deceptiveness of the Adversary and the organizational consequences of secrecy pose the questions: How much of the pattern of events has to be available for moral apprehension? And can an Intelligence organization be structured for effective moral response by participants who hold various roles?
4. Clients govern the broad topics, opportunities, and constraints of inquiry. [Implications omitted from JSCOPE 2000 presentation.]
Intelligence seeks to provide accurate and timely information about the military and economic strength of the Adversary, the intentions of the Adversary, and, most difficult, the effects of Clients’ earlier decisions (Mandel, 1977). Ideally, Clients use Intelligence in critical decisions and are able to execute their decisions more effectively. Clients therefore seek narrative rationales of cause and effect that will suggest solutions to problems and justify decisions to constituencies (Gates, 1989). Intelligence, in contrast, seeks knowledge of patterns (Lincoln & Guba, 1984) with which to apprehend situations, to recognize deceptions, and to stage deceptions. From pattern understanding of how things work, Intelligence draws the elements that might contribute to the Client’s rationale for policy goals or their means of implementation. Intelligence thereby transforms its own knowledge into information as a product for the Client.
The Client-researcher relationship emphasizes the difference between the utility and the validity of knowledge. In the extreme case, the gap between the two may result in strategic ignorance: the Adversary cannot learn from us what we do not know; we are not obliged to take into account knowledge that would hopelessly complicate decision making; and critics cannot hold our Clients accountable for unforeseen consequences.
Clients rarely understand how the actions of the Adversary determine our methods of inquiry. Sometimes our Clients direct or forbid our inquiries in ways that do not serve their true goals. Programs of espionage, for example, cannot be stopped and restarted according to the convenience of foreign diplomacy, because dismissing agents invites betrayals, and reestablishing a network of agents requires many years. In such cases we may risk disobeying the short-sighted directions of our Clients in order to achieve their long-range goals. Of course, this leaves us vulnerable to censure by those whom we serve.
In rapidly fluctuating situations, where the pertinent knowledge is very local and transient, our readiness and expertise may render us the only agents who can conduct successful secret maneuvers towards an urgent goal. Therefore our epistemic expertise may drive us into covert action.

This concludes my primer of the adversarial epistemology of Intelligence. I greatly welcome corrections from JSCOPE members.
The Intractable Moral Problems of Weapons Research
The widely shared premises of Intelligence and the resulting principles of inquiry generate intractable moral problems in weapons research. Human rights problems are inherent because Intelligence is first committed to preserving a social system and a territory against the Adversary, and this mission, in the moral understanding of Intelligence, rightly sacrifices individuals. The necessity for continual innovation in weapons leads Intelligence to commandeer Science for weapons research. Weapons development and testing is a hazardous process that uses up people, places, resources, and social trust. Competition with the Adversary in methods of destruction ratchets up the efficacy of weapons and ratchets down norms of acceptable social conduct.
Organizational constraints wrought by secrecy, such as compartmentalization of information and
strict hierarchy, obscure the moral implications of secret research to participants and thwart
moral review. Secrecy further hinders the social evolution of meaning, which might transform
victimization in secret research to honorable sacrifice, with care and compensation for survivors
and their families, just as for soldiers disabled in combat. In contrast, combat veterans with post-traumatic stress, once discredited as weaklings, have attained some dignity from such a societal transformation of meaning.
Because of the uncertain connections between means and ends, utilitarian moral rationales for risks and injuries in weapons research are not ultimately satisfactory. Dangerous circumstances demand swift decisions. Political conditions change unpredictably. Government makes and executes policies that may disregard the Information produced by Intelligence. And the course of history may undermine the value of the goals for which participants were sacrificed.
Applications of the Adversarial Epistemology to the Ethics of Weapons Research
At the last stage of my presentation plan, I now apply the adversarial epistemology to the central
moral question in weapons research:
For what moral constraints on topics and methods of weapons research are we
willing to lose a battle, a city, a war, the nation?
The adversarial epistemology can make two contributions to the answer. First, it can foster moral consensus on weapons research through negotiations by insiders and outsiders. (Here I suggest that many military professionals count as outsiders because their positions do not entail a “need to know.”) Second, the adversarial epistemology can guide us toward effective moral guidelines in weapons research.
Moral negotiations by insiders and outsiders. The adversarial epistemology applies in large measure to many endeavors not directed toward national security. Some examples are industrial espionage, jury trial, police interrogation, embezzlement, tax evasion, welfare and medical fraud, insurance underwriting and claims adjudication, workplace drug testing and computer monitoring, and personnel selection and educational testing. The premises of the adversarial epistemology focus attention on the common tasks of inquiry and deception, and they screen out the specific social functions, ideologies, institutional settings, historical developments, and forces of corruption with which we commonly identify these various endeavors. The analogies, although imperfect, are important in identifying relevant moral experience of outsiders.
Here is an example of an unsuitable analogy from the trial statement of Susan Crane, a school teacher who disarmed a Trident D-5 missile in 1996 to commemorate the 50th anniversary of the bombing of Hiroshima: “Each day thousands of children die around the world from hunger-related diseases. And still we build Trident missiles. These missiles are cared for in air-conditioned or heated rooms, never neglected, never homeless. We take better care of these weapons than our own children” (Trident disarmed, 1996). Although Crane’s act of resistance may be praiseworthy, child care does not meet the premises of the adversarial epistemology, so her analogy would be excluded from insider-outsider moral negotiations.
Informed by the adversarial epistemology, both outsiders and insiders can make sharper moral assessments in weapons research. For example, in the 1977 Senate investigation of the CIA’s infamous Project MKULTRA on behavioral modification, Director of Central Intelligence Stansfield Turner testified: “We are focusing on events that happened over 12 or as long as 25 years ago. It should be emphasized that the programs that are of greatest concern have stopped” (U.S. Senate, 1977, p. 1). Principles of the adversarial epistemology would enable the critic to reckon sympathetically the costs to Intelligence of stopping such programs. The principles would also lead the critic to ask Turner for an account of process: On what Intelligence rationales did the programs stop 12 to 25 years ago? By what organizational mechanisms did they stop? What programs replaced their functions? How was secrecy imposed on debilitated experimental subjects?
Effective moral guidelines for weapons research. Now I address the question of what moral rules can prevail in weapons research regardless of threat. The adversarial epistemology clearly implies:

No moral rules can prevail in weapons research unless the rules support the methods of inquiry.

For otherwise, as the history of weapons research demonstrates, national security doctrine easily provides a utilitarian moral rationale for violating the rule. The adversarial epistemology, though, may help us to devise moral guidelines that support, rather than thwart, the principles of inquiry.
As weapons research projects evolve, certain patterns of moral problems arise as consequences of the principles of inquiry. The moral problems that accrue, the settings in which they appear, the institutional roles of affected personnel, and the moral challenges of critics and demands for compensation for the survivors can be anticipated in a general way. In political affairs, moral dignity (i.e., social honor in regard to moral standing) (Margalit, 1996) may be calculated as a nearly nonrenewable resource of great utility—internationally, domestically, organizationally, and individually. The strategy of concealing possibly justifiable but unsavory projects has had only partial success (“Atomic Secrets,” 1994). Concealment tactics may slow erosion of an organization’s reputation but do not restore lost moral dignity. In the early 1950s, scientists considered it dignified to work on contract with the CIA (Greenberg, 1977). In the late 1970s the Director of Central Intelligence, Stansfield Turner, stated he must protect the reputations of scientists by concealing their affiliations with the CIA (U.S. Senate, 1977). In 1995 the President’s Advisory Committee on Human Radiation Experiments concluded its Final Report with a two-page excoriation of the CIA for withholding documents about radiation studies reportedly conducted under Project MKULTRA. For otherwise, the Committee said, “it will be impossible to put to rest distrust with the conduct of government” (Advisory Committee, 1995a, p. 839). Of course, the CIA could not reveal such documents without overriding the second premise, that the Adversary is implacable and dangerous. Tainted projects from the past thread into a network of diverse projects in the present, and to pull one thread may jeopardize many.
It should simply be good epistemic practice in weapons research to examine the possible consequences of projects and to reckon the long-term costs of loss of moral legitimacy with Clients. Preparation of such a “moral impact report” would highlight what I will call the “tracking problem in ethics”: the original moral rationale for a project may not apply to the project as it later evolves. The Manhattan Project plutonium experiment (Welsome, 1993) illustrates the tracking problem. Development of the atom bomb naturally led to production of bombs and then to a metabolic study of plutonium in human subjects, in an effort to protect bomb production workers from exposure to plutonium. To this end, Albert Stevens, who had a diagnosis of terminal stomach cancer, was injected with a high dose of plutonium at the University of California Hospital in San Francisco in May 1945. Misdiagnosed, he survived for nearly 21 years, in weakness and great pain due to his bone absorption of plutonium. For many years a Berkeley laboratory regularly took urine and stool specimens from him at his home. Argonne National Laboratory confiscated Stevens’ cremated remains from a funeral home in 1975 to assay the plutonium that remained in his bones.
By the original moral rationale, the gain in knowledge outweighed the harm of deceiving Stevens
and the other 17 terminal patients, for the patients were expected to die long before the
plutonium had deleterious effects. But this moral rationale did not apply to later study of the
seven long-term survivors (12 to 44 years), who became even more valuable for the metabolic
study as time went on. A different moral rationale, justifying long-term debilitation of subjects
and impoverishment of families, would have been required, especially as the study brought a
broader range of health professionals and other confederates into the deception and coverup
schemes (Welsome, 1993). A moral impact report for the plutonium experiment prospectus would
have recognized that terminal medical diagnoses have a regular rate of error, and, following that
path, could have anticipated the moral problems that arose with evolution of the project and the
possibility of an exposé. An alternative course, promising moral dignity in the future, would have
been to seek volunteers for the experiment among elderly personnel of the Manhattan Project,
in a period when many people willingly made much greater sacrifices for their country.
In summary, my object is to engage your interest in the ethics of weapons research and to lay a
ground for fruitful moral discourse by insiders and outsiders. I argue that innovation and
empiricism, which we normally regard as desirable, drive the moral problems in weapons
research, and that character is a secondary matter. My primer of the adversarial epistemology of
Intelligence aims to help outsiders reason about weapons research from the perspective of
Intelligence. The primer derives from four premises:
1. The ultimate goal of inquiry is advantage over an Adversary, not knowledge per se or
knowledge applied to the general welfare.
2. The Adversary is implacable and dangerous.
3. Observations and interpretations are vulnerable to deception by the Adversary.
4. Clients govern the broad topics, opportunities, and constraints of inquiry.
The adversarial epistemology establishes an analogy between familiar endeavors, such as
workplace drug testing, and the concealed endeavors of weapons research, to facilitate sound
moral reasoning by analogy. Ethicist Alasdair MacIntyre (1984) warned that if a social
phenomenon, such as weapons research, is represented as entirely unique, with no analogues
accessible to outsiders, then no moral views can be brought to bear except those of insiders.
Finally, I address the question: For what moral rules on weapons research are we willing to lose a
battle, a city, a war, the nation? The adversarial epistemology implies that no moral rules can
prevail in weapons research unless the rules support the methods of inquiry. I propose one such
rule, the preparation of "moral impact reports" for prospective projects, and discuss the "moral
tracking problem," in which a valid moral rationale for a project does not apply to, or "track," the
project as it naturally evolves under the adversarial epistemology.
In closing, I ask your forbearance with my attempt to formulate the methodology of Intelligence
for outsiders, and I seek your assistance with the Primer of the Adversarial Epistemology.
References
Advisory Committee on Human Radiation Experiments. (1995a, October). Executive summary and
guide to Final Report: Advisory Committee on Human Radiation Experiments. Washington,
DC: U.S. Government Printing Office. [Stock No. 061-000-00849-7].
________. (1995b, October). Final report: Advisory Committee on Human Radiation
Experiments. Washington, DC: U.S. Government Printing Office. [Stock No. 061-000-00849-9].
Arrigo, J. M. (1999). Sins and salvations in clandestine scientific research: A social psychological
and epistemological inquiry. Unpublished doctoral dissertation, Claremont Graduate University.
Atomic secrets. (1994, May 1). New York Times Magazine, p. 25.
Cassidy, D. C. (1993, February). Germany and the bomb: New evidence. Scientific American, 120.
FitzSimonds, J. R. (1995). The revolution in military affairs: Challenges for defense
intelligence. Washington, DC: Consortium for the Study of Intelligence.
Fletcher, W. A. (1990). Atomic bomb testing and the Warner Amendment: A violation of the
separation of powers. Washington Law Review, 65, 285-291.
Gates, R. (1989). Analysis—Discussion. In R. Godson (Ed.), Intelligence requirements for the
1990s: Collection, analysis, counterintelligence, and covert action (pp. 111-119). Lexington,
MA: Lexington Books.
Greenberg, P. (1977, December). Uncle Sam had you. American Psychological Association
Monitor, pp. 1, 10-11.
Libby, W. (1983). Nobel Laureate. Oral history interview conducted by Mary Terrall in 1978,
University of California, Los Angeles.
Lincoln, Y. S., & Guba, E. G. (1984). Naturalistic inquiry. Newbury Park, CA: Sage.
MacIntyre, A. (1984/1981). After virtue: A study in moral theory. Notre Dame, IN: University of
Notre Dame Press.
Mandel, R. (1987). Distortions in the intelligence decision-making process. In S. J.
Cimbala (Ed.), Intelligence and intelligence policy in a democratic society (pp. 69-84). Dobbs Ferry,
NY: Transnational Publishers.
Margalit, A. (1996). The decent society (Naomi Goldblum, Trans.). Cambridge, MA: Harvard
University Press.
Moon, T. (1991). This grim and savage game: OSS and the beginning of U.S. covert operations in
World War II. Los Angeles, CA: Burning Gate Press.
Noonan, J. T. (1999, April). Three moral certainties. The Alice McDermott Memorial Lecture in
Applied Ethics (No. 8), presented to the United States Air Force Academy, Air Force Academy,
CO. [Available from USAFA, Air Force Academy, CO 80840.]
Skagestad, P. (1981). The road of inquiry: Charles Peirce's pragmatic realism. NY: Columbia
University Press.
Trident disarmed—Jury not. (1996, Spring). Healing Global Wounds, p. 9.
U.S. Atomic Energy Commission. (1971). In the matter of J. Robert Oppenheimer: Transcript of
hearing before Personnel Security Board and texts of principal documents and
letters. Cambridge, MA: MIT Press.
U.S. Senate, Select Committee on Intelligence and Subcommittee on Health and Scientific
Research of the Committee on Human Resources. (1977). Project MKULTRA: The CIA's
program of research in behavioral modification. Washington, DC: U.S. Government Printing
Office.
Welsome, E. (1993). The plutonium experiment. Albuquerque, NM: The Albuquerque Tribune. (A
special reprint of a three-day report published November 15-17, 1993, by The Albuquerque
Tribune.)