The Pomo Officer
By Maj Mark S. Swiatek
The Joint Services Conference on Professional Ethics
Imagine the postmodern officer.
If the term ‘postmodern’ brings to mind qualities like cynical, irreverent, nihilistic, and radical, the image may be difficult to conceive. We expect officers to be loyal, dutiful, and to exemplify standards. The characteristics of the one do not gel with those of the other. They are deeply opposed.
If you further associate ‘postmodern’ with a broader development (not one idea but many, and beware: the term begins to lose precision at this point), stemming from the arts and architecture and spreading for the past 40 years throughout the humanities and social sciences, you would likely accept as valid the notion that postmodern ideology has influenced popular culture. This becomes problematic (as the spread of ideas often causes problems) in so far as our recruiting pool, our society, has assimilated certain postmodern ideals. You may then suspect the postmodern officer lurking about already; in which case your image becomes quite clear. Take for example this excerpt from an essay subtitled “Leading Soldiers in Moral Mayhem” which appears under the heading Postmodern Moral “Standards”:
Many, perhaps most, of the young people entering officer accession programs today are, at heart, ethical egoists whose main value is reflected in their mirror. If that’s true, then many of the company grade officers in your command right now are primarily interested in themselves… their ethical frame of reference is decidedly different from that which is at the heart of the traditional military ethic.
Epic claims beg our attention. In this case, the sweeping formulation of postmodern ideas as (a) running contrary to a military ethos, while at the same time (b) pervading our larger society, should ultimately be judged as failing to inform the professionals and public we serve. Consider the effects of such a claim on both the philosophical and practical levels: first, when military ethicists adopt a stance of outright condemnation, or worse, indifference towards postmodern thought, the resulting absence of philosophical debate encourages, perpetuates and self-fulfills the perception of a military detached from its society. We then form two distinct camps, based upon the circumstances one ascribes to (a), and debate whether the military is anachronistic and should adapt to societal changes:
(T)he conflict in values creates a revolutionary environment wherein the military directly challenges its civilian authority to “protect” society from itself. Already the seeds are sown for creating a warrior class that is determined to resist change, evolution, and adaptation in favor of staunch adherence to traditional values and familiar perspectives of reality.
Or whether the military is the moral exemplar and should reject societal changes:
(A)s American society becomes more individualistic, more self-absorbed, more whiney, in a sense, more of a crybaby nation, as I am bound to say on occasion, it becomes doubly important that the gap between the military and society remain substantial.
Both sides tend toward extreme pronouncements and, by virtue of our patriotism, we are almost compelled to take a stance. In the meantime, wars happen. The actual state of affairs in and around the military transforms to meet current combatant requirements, and the transformations have ripple effects. One recent consequence: the traditional military-civilian gap framework—upon which our original formulation essentially hangs and which we use to support the very notion of a distinct military ethic—has lost some of its tacit acceptance and has come under renewed scrutiny. We're interested in the matter of how civilian employees, contractors, guardsmen and reservists hold positions of responsibility everywhere throughout the services, from the academies to Abu Ghraib. We realize civilianization has impacted the military ethic in ways not readily enumerated. For example, a military member whose civilian spouse works at the exchange or commissary stays home while the civilian spouse deploys. Wherein lies that gap, and how do we talk about it? Even if we preserve the gap exclusively for uniformed members, what are we preserving? Gap-talk may be inspirational to recruits, as (in many professions) reinforcement of the choice for purposeful work and sacrifice, but what purpose does it serve once the choice has been made? We wouldn't accept military professionals pointing to societal decay or a moral break with the public to explain their subordinates' shortcomings, crimes, or the failure of a mission. True professionals wouldn't need such excuses, precisely because their "main value is reflected in their mirror." And rightly so. Yet we promulgate the notion of a distinct military ethic and the gap it opens by dismissing the postmodern designs (such as outsourcing) that are effectively closing it. A postmodern, on the other hand, might question whether the gap ever existed in the first place.
The second consequence of a narrow approach to the postmodern takes place at the military practitioners’ level, where men and women live out the application of our work. Moral standards matter most here, in the context of human costs, and a dogged stance against that which we decry as having already transpired makes little sense. For if postmodern ideas date back roughly three military generations and we are already dealing with their influence on our ranks, are we to conclude that military leaders are haplessly talking in black and white to subordinates brought up knowing only shades of gray? Exasperation with fledgling ranks is hardly a new phenomenon, but the specific charge against postmodern recruits is subtler; having less to do with an obvious lack of experience and more to do with a mindset or worldview. We wouldn’t expect our officers and NCOs to motivate troops by initiating a debate like the one (that may or may not be) occurring here, but we do expect them to exemplify and enforce standards, and most importantly, to instill in their subordinates the practice of sound judgment. Their success or failure will be determined when the subordinates act on their own in the absence of direct supervision and clear guidance; in short, when they demonstrate they can navigate the gray areas. A decade ago, the Air Force recognized the need for such men and women to operate along the blurring line between combatant and civilian, to make split-second decisions between preemptive defense and appalling mistakes:
"Ravens are a long way from the gate shack," Harper said. "There's no red lines out on the tarmac (which designate restricted areas), only gray lines. Nothing is black and white when you're overseas. Most of the time you can't tell who's who; nobody wears a uniform. You've got guys walking around the airport with AK-47s slung over their shoulders and pistols falling out of people's pockets. So we've got to use our No. 1 weapon: our minds."
One difference between a skilled worker and a professional lies in the intangible task of deciding when and how a skill is best employed. We call the military a profession and we know our judgments always carry consequences. Speaking gray from an early age should therefore not necessarily be construed as a shortcoming; our failure to shape and build upon that capacity would be.
It should come as no surprise that any talk about the possibility of the postmodern officer will turn back on itself, becoming a discussion about ourselves as leaders and educators. We might begin with a consideration of whether and how epic claims, left packed and unscrutinized, serve as shortcuts for determination and judgment, whether and how they constitute potent forms of intellectual influence, and more appropriately, whether and how we allow them to do these things. These are significant matters for military ethicists, given our unique focus and the inherent responsibilities of those we serve, but they are not unique to us. Similar debates have been underway on American campuses for decades.
Typically, American scholars have been attracted to postmodern critical “moves” but wary of complaints about the academic quality of work labeled as postmodern. It probably doesn’t help that public exposure to ‘postmodernism’ is often defined through media coverage of extreme positions, as in Baudrillard’s provocations during the Gulf War and Sokal’s demagoguery. Cruising under the radar, steady work in fields ranging from physics to anthropology looks very different from the more assertive and, to some, disturbing spectacles propagated in the arts, humanities and media. Yet it all falls under one label and, particularly for military ethicists, assessments of the postmodern become constrained by the political rhetoric of the day. Thus perhaps the difficulty with imagining the postmodern officer as anything but the vain and self-absorbed careerist or the derelict box-checker we typically denounce as poor officers.
Should it concern us as military ethicists that those characterizations were used to describe, respectively, the military service of a decorated veteran/presidential candidate and our Commander in Chief? Or should we write them off as political expediencies, reflecting no underlying sentiments, having little impact, ultimately revealing nothing about society’s perception of the military, or our perception of ourselves—the value reflected in our mirror?
In other words, who is defining the gap? And who is better-served by it?
The question—whether postmodern and military ethics are necessarily at odds—already assumes they are; or (best case) that they may coexist, but only in grayish, diluted forms leaving one or both unrecognizable. A slightly more productive effort would drop the trap of binary opposition and look to those aspects of the postmodern we might wish to sustain and subsume for our own use. As we seem to agree, the ideals have been adapted, at times insipidly, without due consideration of the underlying ethical framework or the human cost. The proposed approach addresses the loss with something of a generative project: a culling of useful values from the embodied knowledge of lived experience (arguably, the traditional work of an ethicist).
The remainder of this paper offers the opening of an attempt, a philosophical sketch of one possibility for the postmodern officer.
Sustaining the Postmodern Military Ethic
Misgivings about motivation, value, and academic quality stem from the body of criticism that views postmodernism as a negative philosophy producing potentially destructive results. Indeed, each of the major, late-20th century philosophers associated with postmodernism appears to have a particular area marked off for demolition. The fact that these philosophers were and are actively engaged in debate among themselves and, as one would expect, often at odds, doesn’t resonate as strongly as their combined effect. Their dissolution of traditional claims in epistemology (meaning, truth) and ontology (God, self, gender) comes off as a bald attempt to challenge doctrine on every front, from scientific fact to ethical entailments. A common refrain among critics points to the diminished, hopelessly relative morality of a postmodern world. Alternatively, it becomes permissible to weigh down the base judgments of daily life with some greater (and sometimes questionable) socio-political significance.
As one would expect, reactions to the perceived onslaught have been sharp. A well-known retort belongs to physicist Alan Sokal, who wrote a parody paper full of postmodern jargon and submitted the paper to a journal, where it was accepted for publication. At the time of its publication, Sokal wrote a separate letter to another journal revealing the hoax. By way of justifying his actions, Sokal proclaimed, “the problem with such doctrines is that they are false (when not simply meaningless). There is a real world; its properties are not merely social constructions; facts and evidence do matter.” To bolster his assertion, Sokal offered, on various occasions: “Anyone who believes that the laws of physics are mere social conventions is invited to try transgressing those conventions from the windows of my apartment (I live on the twenty-first floor).”
That no one took him up on his offer is certainly a validation of common sense, but it’s doubtful whether Sokal found statements against common sense or the existence of reality or, for that matter, against the importance of facts and evidence among the 225 works cited in his original parody paper. Postmodern philosophers do not dispute the external world as existing independently of the human race. At stake for the postmodern philosopher are questions about the properties and laws we ascribe to the world, the extent to which we refer to reliability when our accounts constantly evolve, and the level or amount of imperfection and uncertainty we may ultimately tolerate in making decisions. In short, at stake for postmodernism are the premises and their context, not the conclusions, because the latter are already written into and limited by the former. As Stanley Fish wrote in response to Sokal:
Distinguishing fact from fiction is surely the business of science, but the means of doing so are not perspicuous in nature—for if they were, there would be no work to be done. Consequently, the history of science is a record of controversies about what counts as evidence and how facts are to be established.
Where context and criteria come under scrutiny, we often define the work as a form of skepticism. As philosophers we’re familiar with playing the skeptic’s role, following Descartes, adapting the techniques of Socrates and the Academics and tinkering with Hume’s precision to suit our needs. Perhaps because we’re historically closer to Descartes and the moderns and react directly to (and under) their influence, the postmodern has settled on the older model of skepticism for inspiration: the Socratic admission of one’s shortcomings as a more authentic gesture than the Cartesian manipulation of doubt. Along such a distinction, philosophical lines (and interestingly enough, biographical sketches) are immediately drawn: aporia, possibility, teeming discourse of the streets and a death sentence for antagonizing the state; versus certainty, universality, quiet reflection through the late morning, and self-imposed exile for fear of antagonizing the Church. We are told both Socrates and Descartes had military careers: one was a renowned soldier, a warrior; the other, a member of the Duke’s staff.
Postmodernism-as-skepticism suits our purposes here, but the articulation requires a few caveats to flesh out the project underway within the philosophical tradition. We might think of these as the postmodern’s version of philosophical lessons learned:
1. Methodological skepticism means asking questions to some preconceived end; the postmodern wants to know if philosophy is possible within a context of doubt. The history of ideas could be written as an ongoing conversation between wariness and belief, theory and doubt. Call it a dialectical exchange. The postmodern adaptation of Greek skepticism may be understood as an attempt to open the dialectic; a broader, outward step aimed at providing a basis for the interrogation of an entire period—the modern—and its concomitant ideas—progress, emancipation, reason. To many critics, that’s an unfair move, unless you’re Kant. But where Kant, following tradition, strategically employed doubt and ultimately subsumed it, the postmodern thinks more along the lines of unleashing.
2. Behind every question lies an affirmation or presupposition, some form of understanding that allows the question to be posed. What is at stake? The postmodern challenges characterizations of nature as uniform and continuous where they are generalized (via analogy and inductive reasoning) to promote foundational claims about the uniformity and continuity of human beings. The challenge isn’t new; previous attempts at defining precise, universal grounds have exposed the inherent difficulties in such projects. The postmodern perspective takes that experience, the aggregate of those difficulties, and provides a critique whose first move (not always explicitly stated) may be more accurately described as a question about the question; an examination of the framework, background, history and motive for asking. The question always reveals these underlying beliefs, and to varying degrees, their inherent difficulties. Some postmodern scholars would make the further claim that all questions have a method or purpose, leaving any number of fields, including private life, open to critique.
3. A question may be posed without acknowledging its context. The postmodern makes a pointedly ethical observation: even while we acknowledge the difficulties with precision and certainty, we’re inclined to proceed under a guise of precise and certain direction, guidance and leadership. We do so most perilously when we explore, then justify and act upon general claims about human beings. But what other method do we have? Given that human beings at all levels of society ostensibly operate under the notion that their accounts of reality are accurate, and given the inevitability of differing accounts, there comes a time when persuasion, compromise, suspension of judgment or some combination of these must each in its turn play a role in the conduct of daily living. The postmodern argues that the context for deciding when and how we allow ourselves to be persuaded, where we are willing to compromise, and what we treat indifferently are more muddled than we let on. Background noise and interminable regressions are so prevalent that we have, of sheer necessity, reduced context to its bare essentials; to what must be addressed and analyzed. Concern over what we’re allowing to slip through, unnoticed and unchallenged, motivates postmodern interests in technological advances (especially regarding the media) and the status of both personal communication and public interaction.
4. The skeptic is not privileged. The force of skepticism lies in its negativity: not the trivial fact that it challenges belief without offering an alternative, but as Hume demonstrated, the formal statement of the potential for belief and actuality to differ and not be taken as contrary. We work out the degrees of difference when and where we choose, often mitigating contradictions by adjusting our beliefs and our logic. The fact that such work must be done reflects a foundational flaw in our logic, that defining, human flaw, whereby we already assume (following Hume’s classic example) the uniformity and continuity of nature in posing the very questions about whether and how nature is uniform and continuous. Human beings are all adherents to the idea that our experience serves as the best guide and indicator of what lies ahead, even though we can’t justify this belief, this premise, without presuming it. The postmodern perspective expands the notion to the working levels of society and human interaction, maintaining that our reasoning will always be circular, self-referential, and therefore flawed when variously measured against social standards, principles and laws, or the expectations of another. Hume’s original qualifications are analogous here: uniformity and predictability outweigh irregularities and anomalies, otherwise we wouldn’t be living in a recognizable society. But the stakes are higher. Irregularities in society translate to disenfranchisement, prejudice, crime and injustice. They are inevitable, regardless of which particular gender, race or religious group sets the agenda. Thus the postmodern scholar could offer a critique from the perspective of any marginalized group, but would not promote that particular group’s agenda. Of course, many writers who think they like postmodernism do both, employing the theoretical moves they’ve picked up in a literary criticism class in an expressly methodical fashion.
5. The act of questioning, as a method of engaging the world, has practical, social, and political limits. The postmodern perspective on the world may appear bleak. But this is who we are. We successively and successfully reshape the world to make it more functional; to make life easier, more pleasant, or deadlier. In doing so, we operate on good faith and predictability. Reflexively, we pull back from or disregard more penetrating questions in order to get on with our daily lives. As Wittgenstein wrote, “the real discovery is the one that makes me capable of stopping doing philosophy when I want to.” Of course, he wasn’t writing for the benefit of the non-philosopher. Human beings don’t need to be reminded of the points at which we pull back to move on—that is, where one stops asking questions and starts assuming or taking for granted. The turning away points are instinctual, embodied, habitual, socially inherited and, following this slippery slope, at some point, politically sanctioned. If turning away were not variously agreed upon or imposed, we would have no practical basis for trust or assumption. We would cease to be Aristotle’s social creatures because there would be no purpose to moving on. But there is. And we do.
Throughout our tradition, skepticism counterbalances our innate dependence on belief and legitimizes our decisions over where to place trust. Postmodernism tests the limits of that service by requiring philosophy to proceed from the foundational flaws of our logic without presuming those flaws, our humanity, can or will be overcome. Still, even as postmodernism appears to resist the urge to make aporia reducible for mass consumption, i.e., to offer a positive course, it can’t escape having to make its own decisions about how to engage the world and how to go on. Nor can it fully escape criticism similar to that levied against Socrates; e.g., as setting the bar too high for standards of knowledge, or to paraphrase Ayer, as having impeccable logic and winning an empty victory. To these, the postmodern responds by observing how acquiescence to our limitations (and the humility accompanying such an acknowledgment) is inherent to both the project of philosophy and to the conduct of society.
That, of course, is not a direct response. The disconnect, frustrating for both sides, comes from talking about the same thing, a theory or set of principles, from different levels. Traditionally, we test theory by looking to its accuracy for predicting actions and results, for explaining present conditions, and for facilitating analysis; in other words, by applying the principles to practical situations, case studies, or experimentation. The postmodern test focuses less on the applicability of a theory and more on its superordinate status and consistency; an examination of the logic and motivation upon which the principles rely and how that status is both drawn from and reflected in practical application. In other words, a check of what lies above (meta-) and below (practice) the principle, and how those levels inform and interact with the theory. Derrida, for example, has doggedly held and attempted to show how (in practice) every decision we make occurs in a context of instability and insatiability, arguing (from meta-linguistics) that a context is never completely filled. The point is not to preordain the instability of a too-high (theoretical) standard, but rather to acknowledge the dull, inevitable insatiability of daily life, caused by interminable factors, forces, and our own assumptions, all of which relate, in varying ways, to the process of deciding. Thus a postmodern view inculcates the value of interminability, citing rich contexts as necessary to begin any formation of claims about responsibility, judgment and agency. Otherwise, the decisions we make would amount to little more than the following of a script. Or, hypocritically, the rationalization for not following the script to which we hold others. And the script, essentially, was the image Clausewitz chose to differentiate the military commander from other professionals:
(The architect) selects the data with care, then submits them to a mental process not of his own invention, of whose logic he is not at the moment fully conscious, but which he applies for the most part mechanically. It is never like that in war. Continual change and the need to respond to it compels the commander to carry the whole intellectual apparatus of his knowledge within him.
As military philosophers, we focus on the ethical aspect of that apparatus, working to help our cadets, midshipmen and officers learn to carry the core values of their service within them. We use the term ‘internalize’ to name that process. It lies at the core of both ethical and educational theories of development, though it remains something of a mystery. It has been debated since Plato. No account of how we internalize an ethical code has ever been fully demonstrated, nor may one ever be, and there’s nothing to indicate the debate won’t go on for another two thousand years. So we settle, for now, for the sake of going on, with more probable explanations, revisited and reworked as necessary in the wake of outrage and scandal.
We, the military ethicists, generally agree that internalization involves more than legalistic service, more than a simplistic adherence to standards. We talk of incorporating values in the self and of making values personally applicable. At the same time, on a more practical level, we recognize (and are reminded every few years) that military academies and accession programs cannot assume exclusive responsibility for the ethical development of our recruits. Coming from rich contexts of their own, our recruits have already developed as ethical agents. They are not empty vessels. Thus we’ve resolved to add formal ethics training to broaden and help explain those contexts, intending the courses at least to contribute to, more hopefully to shape, their future behavior. We also recognize that the ultimate responsibility for education falls more on the student/recruit than the institution, just as ethical behavior is ultimately the responsibility of each individual. The courses are designed to point our students toward that same recognition, or, if they’re already oriented that way, to take them down the path a little farther. Again we turn to Clausewitz, who in the finest military tradition, captures in a few lines the practical essence of the pedagogy passed down since Plato:
No activity of the human mind is possible without a certain stock of ideas; for the most part these are not innate but acquired, and constitute a man’s knowledge. The only question therefore is what type of ideas they should be.
Indeed! And few should disagree with the need to influence and inform, in some principled manner, the actions of our recruits and, recurrently after their accession, of our officers and NCOs. What motivated Plato, in founding the ideal state, to censor certain forms of poetry and music from the guardians (and by analogy, from the virtuous person) persists in motivating us to focus, limit, and concentrate our teaching on the values and characteristics we hold as requirements for military leadership. What motivated Plato was a skeptical eye toward the motivations and capabilities of human beings. His pedagogical model effectively transmits societal standards and promotes adherence to institutional norms, but it may also be described as negative or negating, in the sense that the military ethic—the best human qualities—must be carved from the parent culture and redirected or gapped, as it were, from the society it serves, so that the military man or woman learns to value and personify—internalize—certain ethics or standards even when society might not. We want our officers and recruits to be skeptical in that regard, but only up to the Platonic point; to know, in some sense, better than society, but still agree to serve in a most significant capacity. We want them, in other words, to be deeply conflicted: to be guided by principle and recognize the inadequacies of translation to practice; to see the complexity of understanding among thoughtful, decent people (the people they serve); and finally, to stay close to aporia without losing faith, maintaining a semblance of trust even when it appears their trust is not warranted.
Or, to speak obliquely to the topic of this conference, we want our officers and NCOs to fight wars they may have otherwise learned to identify as unjustified and ill-advised. And we want more from them, because in such wars, the quality and extent of our victory, not whether we will win but how well, is directly related to how our troops weather the daily assaults on their personal convictions.
We have, therefore, some shared vocabulary between the military ethic and the postmodern, at least the beginnings of an awkward exchange. Both are concerned with judgment, responsibility and agency. And both value internalization, an engagement with personal struggle, as necessary to balancing the watery diet of moral absolutes against the muddying realities of daily life. Postmodern doctrine gets us to the struggle but refuses further aid, making us find our own way out, reminding us of our limitations as we weigh evidence and make decisions on how to proceed. That same disposition may have won Socrates his hemlock, but it suits the military mindset. Putting one’s private self on the line and taking risks by challenging personal convictions—by testing that which we internalize—are exercises best carried out before the blunting crush of combat. Most importantly, postmodernism demands we take responsibility for our decisions, our answers, and with equal frankness and attention, for the questions we ask.
James H. Toner, “A Message to Garcia: Leading Soldiers in Moral Mayhem,” which appears as chap. 15 in Don M. Snider & Gayle Watkins’ The Future of the Army Profession.
Maj Gregory G. Washington,
 Towards the end of his section on the postmodern, Dr. Toner writes, “We know that ethical egoism doesn’t always clash with military service, but it’s very likely to, especially when the chips are down.”
 When we consider contributive factors such as individual history, experience, culture and personal expectations, the transformational dynamic of military service loses much of its mystique.
 See Baudrillard’s The Gulf War Did Not Take Place, trans. Paul Patton (Bloomington: Indiana University Press, 1995). Sokal’s case is detailed below.
 A few philosophers and their chief “targets”: Lacan (self); Lyotard and Derrida (knowledge); Foucault (history); Baudrillard (representation and meaning); Kristeva (gender); Wittgenstein (language). The same could be said of G.E. Moore, as the naturalistic fallacy may be applied to foundational claims (being and knowledge) in addition to morality.
 Alan Sokal, “A Physicist Experiments with Cultural Studies,” Lingua Franca, May/June 1996, pp. 62-64. The article revealed his parody paper, “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” published in Social Text 46/47 (1996), pp. 217-252.
 Though it is by no means impossible, considering his extensive listing (about 85%) of secondary sources.
 Early on, postmodern philosophers were accused of being neo-conservatives, even fascists, for their criticism of modern humanism; today they are accused of being radical and revolutionary leftists.