Rival Traditions of Character Development:

Classical Moral Philosophy and Contemporary Empirical Science

by David W. Lutz
University of St. Thomas


A paper prepared for presentation to the Joint Services Conference on Professional Ethics XVII
Washington, D.C., January 25-26, 1996

(The views presented herein are entirely those of the author, and do not represent the official position of the JSCOPE Conference or the Department of Defense.)


I. Fiedler's Empirical Leadership Research

According to Edgar M. Johnson, Director of the U.S. Army Research Institute for the Behavioral and Social Sciences (ARI), "it turns out that experience is not the best teacher!" This statement occurs in the Foreword to a research report entitled Leadership Experience and Leadership Performance, published by the ARI. The author of that report and former S.L.A. Marshall Chair at the ARI, Fred E. Fiedler, acknowledges that the value of leadership experience seems intuitively obvious, but tells us that there is little empirical evidence that leadership experience contributes to better organizational performance. And Fiedler has himself been conducting empirical research in this field for more than twenty years.

One of the empirical studies cited by Fiedler involved ninety-six groups of three Belgian naval forces personnel. Each group had a designated leader. Half of the groups were led by petty officers with an average of eleven years of naval service after graduation from petty officer candidate school. The other forty-eight groups were led by recruits with less than eight weeks of naval service. After the experiment was concluded, a comparison of mean performance scores of teams led by petty officers and by recruits revealed no significant differences.
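
Fiedler reports the comparison only in summary form. For concreteness, a comparison of mean team scores of this kind might be run as in the following sketch. The scores are invented placeholders, not Fiedler's data, and the t-test is one conventional choice of significance test, not necessarily the one the study used:

    # Comparison of mean team performance scores, in the style of the
    # Belgian study; the scores below are invented, not Fiedler's data.
    from scipy import stats

    petty_officer_led = [72, 65, 70, 68, 74, 69]  # invented team scores
    recruit_led       = [70, 67, 71, 66, 73, 68]  # invented team scores

    t, p = stats.ttest_ind(petty_officer_led, recruit_led)
    print(f"t = {t:.2f}, p = {p:.3f}")
    # "No significant difference" means a p-value too large (conventionally
    # p > .05) to reject the hypothesis that the groups perform equally well.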

Another experiment was conducted with personnel of the Canadian Armed Forces. In this case half of the three-man groups were led by commissioned officers with an average of eight years of military experience, and the other half by recently-inducted recruits with no prior military experience. As in the Belgian study, Fiedler reports, there were no significant differences between the highly experienced officers and the inexperienced recruit leaders.

If an argument is valid (i.e., its conclusion follows logically from its premises) and its premises are true, its conclusion must also be true. Therefore, if we believe an argument is valid and has true premises, we have a reason to believe that its conclusion is true. But what if such an argument has a conclusion that we believe to be false? Does rationality require that we accept the truth of the conclusion, no matter how absurd it seems to be? No. If an argument is valid and has a false conclusion, at least one of its premises is false. And if an argument has true premises and a false conclusion, it is invalid. If an argument has a conclusion that we believe to be false, it is no less rational to look for a problem in its premises or its logical structure than to accept its conclusion. Eighteenth-century philosopher Thomas Reid, the leading figure in the Scottish school of common-sense philosophy, has made this point well:

A traveller of good judgment may mistake his way, and be unawares led into a wrong track; and, while the road is fair before him, he may go on without suspicion and be followed by others; but, when it ends in a coal-pit, it requires no great judgment to know that he hath gone wrong, nor perhaps to find out what misled him.

The conclusion that leadership experience does not significantly improve leadership performance is, to use Reid's term, a coal-pit. If the average raw recruit could command soldiers, sailors, airmen, and marines as well as the average senior commissioned officer, we could save a great deal of time and money by allowing raw recruits to command a group of their peers (not to give them experience, only to determine which ones can lead), and then promoting the best leaders to the rank of general or admiral and placing them in command positions. But that would be absurd--as Fiedler himself acknowledges at one point: "Who would entrust the command of an Army to someone who has had little or no military experience?" Therefore, something has gone wrong, and we may be able to find out what misled us.
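
The logical point of the preceding paragraphs can be stated compactly. Writing V for "the argument is valid," P for "all of its premises are true," and C for "its conclusion is true," the definition of validity gives the first implication below, and the second is its contrapositive; the notation is my formalization, not Reid's:

    (V \land P) \rightarrow C
    \neg C \rightarrow (\neg V \lor \neg P)

So a reader confident that C is false is rationally committed only to denying a premise or denying validity, which is what the remainder of this section attempts.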

In the Belgian naval forces study, each three-man group performed four tasks:

(1) Routing a hypothetical ship convoy through ten ports in the shortest way;

(2) Routing a hypothetical ship convoy through twelve ports in the shortest way;

(3) Writing a letter urging graduating high school students to join the Belgian naval forces as a career; and

(4) Instructing the other group members how to assemble and disassemble a .45 caliber automatic pistol without the use of language.

In the experiment with the Canadian Armed Forces personnel, each group performed three tasks:

(1) Writing a recruiting letter;

(2) Finding the shortest route for a truck convoy; and

(3) Transforming a fictional test-score distribution to standard scores (based on instructions), computing means, and drawing bar graphs. (The standard-score transformation is sketched below.)
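
For readers unfamiliar with the third task: a standard score restates each raw score as its distance from the mean in standard-deviation units. A minimal sketch of the transformation, using invented scores rather than anything from the study:

    # Standard-score (z-score) transformation of a hypothetical
    # test-score distribution; the raw scores below are invented.
    raw_scores = [55, 62, 48, 70, 66, 59]

    mean = sum(raw_scores) / len(raw_scores)
    sd = (sum((x - mean) ** 2 for x in raw_scores) / len(raw_scores)) ** 0.5

    # z = (x - mean) / sd: each score restated as its distance from
    # the mean in standard-deviation units.
    standard_scores = [round((x - mean) / sd, 2) for x in raw_scores]
    print(f"mean = {mean:.1f}, sd = {sd:.1f}")
    print(standard_scores)

The point of the exercise is arithmetic, not command, which bears on the criticism that follows.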

In the Belgian experiment, four independent judges rated each team's performance on the letter-writing task, and the other three tasks were scored objectively. In the Canadian study, the quality of the recruiting letters was judged by all of the officers in the study (with high inter-rater agreement), and the other two tasks were scored objectively.

Concerning the second experiment, Fiedler writes that the experimental tasks were selected in cooperation with senior Canadian officers in order to make these tasks similar to those officers might be asked to perform as part of their duty. But even if that is true, my own military leadership experience suggests that these are not the kind of tasks in which one person typically leads military subordinates. The three convoy tasks and the test-score task are mathematical problems. If forced to choose between placing my money on groups led by teenagers with no military experience, but ninety-ninth-percentile mathematical aptitude, and placing it on groups led by experienced military leaders with average ability in mathematics, I would go with the whiz kids--but not because I would consider them to be more capable of combat leadership. And I would hope that the leaders would attempt to solve the problems individually, rather than attempting to lead others in solving them, because attempting to discuss the problems with others might distract them from having the mathematical insights necessary to solve the problems.
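
The convoy tasks illustrate the point: finding the shortest route through a set of ports is a small instance of the travelling-salesman problem, solvable by individual calculation rather than by leading anyone. A minimal brute-force sketch, with invented port coordinates rather than the study's materials:

    # Shortest route through a handful of ports by exhaustive search;
    # the ports and coordinates are invented for illustration.
    from itertools import permutations
    from math import dist

    ports = {"A": (0, 0), "B": (3, 4), "C": (6, 1), "D": (2, 7), "E": (8, 5)}

    def route_length(route):
        # Total distance of visiting the ports in the given order.
        return sum(dist(ports[a], ports[b]) for a, b in zip(route, route[1:]))

    best = min(permutations(ports), key=route_length)
    print(best, round(route_length(best), 2))

Exhaustive search is feasible only because the instances are tiny (ten or twelve ports is near the practical limit of this naive approach), and nothing in the task calls for military subordinates at all.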

Similarly, I would expect that writers with no military-leadership experience, but excellent verbal skills, would outperform excellent, seasoned combat leaders with mediocre writing abilities in the two letter-writing tasks. And how would one lead others in writing a letter? During my peace-time military service I wrote many letters, and sometimes gave drafts of letters I had written to other officers for critical comments. But I was never called upon to lead subordinates in writing a letter.

Fiedler believes it is significant that petty officers did not significantly outperform recruits in the handgun assembly task, a task in which an experienced military man had superior technical knowledge. It is certainly true that someone who did not know how, or had only vague knowledge about how, to assemble and disassemble a pistol would be handicapped in teaching others how to do so, with or without the use of language. And it is not inconceivable that one might need to perform such a task in combat, perhaps if introducing an unfamiliar weapon to soldiers of an allied army who spoke a different language. But it does not seem to me that this task has much to do with leadership. Many good instructors are poor leaders.

Therefore, it seems to me that the seven tasks in these two experiments have little to do with military leadership performance, and consequently that the experiments tell us little about the relationship between leadership experience and leadership performance. The well-known data-processing principle, "garbage in, garbage out," is relevant to this research project: no matter how sophisticated the statistical manipulation of the research data, since the experiment itself has little to do with military leadership, the same is true of the results. Mathematical reasoning and letter-writing are typically individual activities; winning a war requires a team effort.

But my criticism of this research project is not based merely upon my judgment of the experiments' relevance to military leadership differing from someone else's judgment. The more important point is that the experiment itself provides no evidence of a correlation between performance in these seven tasks and military-leadership performance. That is an assumption external to the empirical experiment. And we are all free to compare our confidence in this assumption with our confidence in other assumptions, such as the assumption that military experience tends to improve military performance. And whichever of these assumptions we choose as most plausible, in choosing between them we are doing philosophy, not empirical science. Without making at least one philosophical assumption, we cannot possibly establish a relationship between empirical research and the evaluation of leadership performance.

That Fiedler is doing philosophy, in addition to empirical science, in this research report is indicated by sentences such as "To account for these findings, we assume that people seek to capitalize on their strengths" and "We speculate that intellectual problem-solving cannot co-exist with the automatic fall-back to previously learned behaviors and policies." Without speculations and assumptions, the body of research data would be of little value to us. But speculating and assuming are more than merely guessing. They are outside the domain of empirical research, but nevertheless within the domain of logic. Speculations and assumptions that contradict one another cannot all be true.

Fiedler writes at one point that a set of experimental results showing no statistically-significant correlation between leadership experience and leadership performance "is consistent with our everyday observations as well as history... Joan of Arc, the Marquis de Lafayette, Alexander the Great, and William Pitt became outstanding leaders before they were 25 years old. Several of the most effective U.S. presidents, e.g., Abraham Lincoln or Harry Truman, had little managerial experience, while some U.S. presidents with a great deal of leadership experience, e.g., Zachary Taylor, Franklin Pierce, or Herbert Hoover, were among the least effective."

But this is a short list upon which to base an assertion of consistency between the results of twentieth-century leadership experiments and history. Are there not at least as many examples in history of persons who became increasingly-better leaders during the course of a long lifetime of leadership experience?

And despite the results of the Belgian and Canadian experiments, Fiedler does not in fact conclude that there is no significant relationship between leadership experience and performance:

We have found no evidence that leadership experience, however measured, contributes significantly to the leader's or the organization's performance. While this might lead one to believe that experience plays no role in determining leadership performance, this conclusion would run counter to common sense and every learning principle we know. The question is rather, under what conditions does the leader's experience contribute to present and future performance?

Part of his answer to this question is that experienced leaders are more effective under stress than are less experienced leaders. But is this finding "consistent with our everyday observations as well as history"? Did not Joan of Arc, Lafayette, Alexander, and the others lead under stress? If we add the assumption that combat leadership is stressful--an assumption we know to be correct--Fiedler's position is that we have no evidence that leadership experience contributes significantly to leadership performance, but we do have evidence that leadership experience contributes significantly to leadership performance in combat. Perhaps we should not be hasty in concluding that experience is not the best teacher.

II. The Classical Tradition of the Virtues

Once we see that Fiedler has given us a set of empirical experiments embedded within a confused philosophical argument, we can compare his argument with those of other philosophers. Several of the ancient-Greek philosophers, and most notably Aristotle (384-322 B.C.)--the teacher of the boy who would become Alexander the Great--developed a theory of moral philosophy in terms of excellent character traits, or virtues. Aristotle believed that one can become an excellent person only by performing excellent actions until doing so becomes habitual. In other words, experience is necessary. He makes this point by contrasting virtues and natural capacities:

Of all the things that come to us by nature we first acquire the potentiality and later exhibit the activity (this is plain in the case of the senses; for it was not by often seeing or often hearing that we got these senses, but on the contrary we had them before we used them, and did not come to have them by using them); but the virtues we get by first exercising them, as also happens in the case of the arts as well. For the things we have to learn before we can do them, we learn by doing them, e.g. men become builders by building and lyre-players by playing the lyre; so too we become just by doing just acts, temperate by doing temperate acts, brave by doing brave acts.

Aristotle's teacher, Plato (c. 428-c. 348 B.C.)--whom General MacArthur called "that wisest of all philosophers"--identified four chief or cardinal virtues: wisdom, bravery, temperance, and justice. While he did not include a virtue translatable into English as leadership, his discussion of wisdom shows that he understood it to include excellent leadership. In describing his vision of the ideal Greek city-state, he understood wisdom to include sound judgment, and to be the virtue of the city's political rulers, those who deliberate not about particular matters within the city, but about the city as a whole.

Aristotle distinguished two species of wisdom, which in English can perhaps be best named theoretical wisdom and practical wisdom. When the ideas of the Greek philosophers were translated into Latin, practical wisdom became prudentia, a direct ancestor of the English prudence. Although some English-language writers use prudence as the name of this virtue, doing so can cause confusion, because many contemporary writers contrast prudence and morality. Throughout its history, whatever name was given to it, this virtue was concerned with the good of the person possessing it. But because Plato and Aristotle believed that being ethical was good for oneself, there could be no conflict between prudence and morality. One of the greatest differences between ancient and contemporary moral philosophy is the modern belief that being moral can and often does conflict with one's own good. As one twentieth-century American moral philosopher puts it: "The very raison d'être of a morality is to yield reasons which overrule the reasons of self-interest in those cases when everyone's following self-interest would be harmful to everyone." Thus I use practical wisdom, not prudence, to avoid the unintentional suggestion that this virtue could conflict with morality.

When we understand that for Plato and Aristotle ethics was concerned with how to become an excellent person, not with giving us reasons to act contrary to our own good, we can see more clearly the relationship within their tradition between ethics and leadership. The virtue of practical wisdom is concerned with the good of both the leader and those led, and is a trait or state of character of a virtuous, and therefore ethical, leader. And since it, like the other moral virtues, can be acquired only by performing virtuous actions, it cannot be acquired without long-term leadership experience.

The texts of Aristotle's ethical theory were lost to the West, but preserved by Islamic scholars, for centuries. Early in the thirteenth century they became available in Western Europe. Many Christians of that century considered Aristotelianism to be incompatible with Christianity. Aristotelian ethics is goal-directed ethics. Aristotle maintained that all human persons desire happiness and perform actions intended to achieve that objective. A life of virtuous actions, directed toward true goods, leads to the ultimate end of true happiness. A life of vicious actions, directed toward merely-apparent goods, leads in the opposite direction. As Alasdair MacIntyre has pointed out, however, what Christianity requires is "a conception not merely of defects of character, or vices, but of breaches of divine law, of sins." But the Dominican philosopher-theologian Thomas Aquinas (c. 1225-1274) achieved a synthesis of Aristotelian philosophy and Christian theology, one that had much to say not only about virtues and vices, but also about obedience and disobedience to moral law.

It is quite common today for works in applied ethics to include a brief discussion of ethical theory, saying that there are two types: deontological theories and teleological theories. The standard example of a deontological, or duty-based, ethical theory is that of Prussian philosopher Immanuel Kant (1724-1804). At the core of Kant's theory is his categorical imperative. While he believed there is only one categorical imperative, he stated it variously:

Act only according to that maxim by which you can at the same time will that it should become a universal law.

Act so that you treat humanity, whether in your own person or in that of another, always as an end and never as a means only.

Act according to maxims which can at the same time have themselves as universal laws of nature as their object.

Act by a maxim which involves its own universal validity for every rational being.

Act according to the maxims of a universally legislative member of a merely potential realm of ends.

The standard examples of teleological, or consequence-based, ethical theories are those of British utilitarians such as Jeremy Bentham (1748-1832), John Stuart Mill (1806-73), Henry Sidgwick (1838-1900), and G. E. Moore (1873-1958). Francis Hutcheson (1694-1746), though not usually included among the utilitarians, is responsible for what is probably the best-known statement of utilitarianism: "That action is best, which accomplishes the greatest happiness for the greatest numbers." And Mill, who is considered by many to be the chief nineteenth-century advocate of utilitarianism, explains:

The creed which accepts as the foundation of morals utility or the greatest happiness principle holds that actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure and the absence of pain; by unhappiness, pain and the privation of pleasure.
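
Later decision-theoretic writers often compress the greatest happiness principle into a maximization formula. As a hedged gloss (the notation is mine, not Mill's): with A the set of available actions and u_i(a) the balance of pleasure over pain that action a yields for person i, the principle directs the agent to choose

    a^{*} = \arg\max_{a \in A} \sum_{i=1}^{n} u_i(a)

Note that the sum weighs every person's utility equally, including the agent's own, a feature that returns in the discussion of Stage 5 below.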

In the over-simplified, but not totally unhelpful, manner in which these options are usually presented in works of applied ethics, deontological theories are concerned only with the rightness and wrongness of actions, not of their consequences, and teleological theories are concerned only with the consequences of actions or rules of action: the end justifies the means. And it is becoming increasingly common, largely because of the influence of MacIntyre's work, for authors to mention virtue ethics as a possible third option.

One of the problems of this smorgasbord approach is the suggestion that the various options are different roads to the same destination, different facets of the same diamond, or different points from which to view the same reality. But these options are logically inconsistent with one another. Therefore, either all but one of them are false or all of them are false. Authors and teachers should argue that one of them is true or that they are all false, or failing that, stress the point that no more than one of them can be true. To offer them as options of equal status is to provide academic reinforcement to the vague notion of ethical relativism that most contemporary Western students bring to their formal study of ethics.

These options are not of equal status, because Kantian and utilitarian theories are merely built around bits and pieces of the Thomistic synthesis that was rejected during the so-called Enlightenment. As MacIntyre has made the point, we now possess "the fragments of a conceptual scheme, parts which now lack those contexts from which their significance derived. We possess indeed simulacra of morality, we continue to use many of the key expressions. But we have--very largely, if not entirely--lost our comprehension, both theoretical and practical, of morality."

Aristotelian ethics is primarily teleological, but is also deontological. Human life is spent in the pursuit of happiness. Our ultimate end is being perfectly virtuous, attaining complete happiness, developing fully our potential as members of the human species. But because we can progress toward and attain this end only by acting virtuously, not by acting viciously, not all means can be justified by the end. Actions and consequences cannot be separated from one another in the way they are in many modern ethical theories. Thomistic ethics is also primarily teleological--being is more fundamental than doing--but with its emphasis on moral law, it is even more deontological than Aristotelianism. Only after this understanding of the relationship between actions and their consequences was ripped apart during the Enlightenment did moral philosophers construct from pieces of it what are generally considered today to be the major options in ethical theory.

We can see the teleological and deontological nature of the tradition of the virtues by looking at what it says about the profession of arms. In discussing the ultimate end of man, Aristotle explains that there is a hierarchy of actions, arts, and ends, with some subordinate to others. He illustrates the point with examples: "Bridle-making and the other arts concerned with the equipment of horses fall under the art of riding, and this and every military action under strategy." Furthermore, the end of strategy is victory. But this does not mean that any means whatsoever are justified in attaining the end of military victory. Many writers within the tradition of the virtues have written about the distinction between waging war justly and waging war unjustly.

While most discussions of the relationship between war and the virtue of justice focus on the morality of killing other persons, justice is also relevant to the morality of sacrificing one's own life. Military service sometimes leads not only to killing, but also to dying. Within the tradition of the virtues, there are three species of the virtue of justice, each of which brings order to one kind of relationship within a political body. Commutative justice orders the relations of individuals to one another; distributive justice orders the relations of the social whole to individuals; and legal or general or contributive justice orders the relations of individuals to the social whole.

Social philosopher Heinrich Rommen, writing within the tradition of the virtues, explains that legal justice "requires from the individual, as member, loyalty and allegiance, even the sacrifice of life when the sacrifice of one's life is necessary under given circumstances and on account of one's specific function in the whole." He explains why this is true in terms of the common good, an important term and concept within the tradition of the virtues: "To sacrifice even life is a clear duty of legal justice, because the life of the community in the order of the common good is a higher form of life than is the bodily life of an individual." But this raises a difficult question: Even if sacrificing one's life is a duty of the virtue of justice, why should the soldier do what duty requires instead of doing what is best for himself? Rommen presents the virtue tradition's answer to this question, one that seems strange to many in our century: "The sacrifice of one's own life can have meaning only if by it man serves his own good, too. That would mean that in reality there is no conflict between the common good and the rightly understood private good." Just as, with the rejection of virtue ethics, we now frequently oppose prudential and moral reasons for acting, we also frequently see disharmony between the individual good and the common good. The utilitarians' "greatest happiness for the greatest numbers" is the virtue theorist's common good separated from a theory that explains why it is good for oneself to promote the common good. Because it begins with individual persons, human atoms, it cannot satisfactorily answer the question: Why should I act to increase the utility of other persons when doing so would decrease my own utility, because it would mean sacrificing my own life?

The most distinctive virtue of soldiers for the ancient-Greek philosophers is bravery or courage (or fortitude, tenacity). According to Aristotle, he will be called brave "who is fearless in face of a noble death, and of all emergencies that involve death; and the emergencies of war are in the highest degree of this kind." Aristotle's description of this virtue stresses its teleological character:

The end of every activity is conformity to the corresponding state of character. This is true, therefore, of the brave man as well as of others. But courage is noble. Therefore the end also is noble; for each thing is defined by its end. Therefore it is for a noble end that the brave man endures and acts as courage directs.

Aristotle says more about why it is good, not only for others, but for the brave soldier himself, to give his life on the battlefield within his discussion of self-love. He explains that there are two ways of understanding the relationship between loving oneself and performing virtuous actions. Most people ascribe self-love to those who "assign to themselves the greater share of wealth, honours, and bodily pleasures." Such persons deserve reproach, Aristotle believes, but are not true lovers-of-self, because the goods they seek are inferior goods. Such actions do not benefit others, but also do not bring happiness to oneself.

The true lover-of-self seeks higher-order goods for himself, and in doing so benefits other persons. Aristotle illustrates this principle by describing the courageous soldier:

The good man should be a lover of self (for he will both himself profit by doing noble acts, and will benefit his fellows), but the wicked man should not; for he will hurt both himself and his neighbours, following as he does evil passions. For the wicked man, what he does clashes with what he ought to do, but what the good man ought to do he does; for reason in each of its possessors chooses what is best for itself, and the good man obeys his reason. It is true of the good man too that he does many acts for the sake of his friends and his country, and if necessary dies for them; for he will throw away both wealth and honours and in general the goods that are objects of competition, gaining for himself nobility; since he would prefer a short period of intense pleasure to a long one of mild enjoyment, a twelve-month of noble life to many years of humdrum existence, and one great and noble action to many trivial ones. Now those who die for others doubtless attain this result; it is therefore a great prize that they choose for themselves.

One of the consequences of rejecting the virtues tradition is that the understanding of self-love held by the majority in Aristotle's day, but that he and other moral philosophers in his tradition understood to be mistaken, has become a point of agreement among most contemporary moral philosophers, even as they disagree with one another strongly at other points. And it should be noted that Aristotle's argument does not rest upon belief in reward and punishment after death. Whatever he may have believed about life beyond the grave, his claim is that it is better, in terms of earthly life alone, to die while performing a heroically-virtuous action than to live to an old age with the memory of having been a coward.

Furthermore, the change in moral theory's understanding of the relationship between acting rightly and loving oneself did not occur when the Aristotelian and Christian traditions were synthesized in the thirteenth century. While most Jews and Christians, and many others, believe one should love one's neighbor as oneself, there is disagreement in interpreting the commandment. One writer, speaking for many others, believes that this maxim "stresses love, not self-interest as a moral base of conduct." But this interpretation stands in contrast to that of Aquinas, arguably the most important theologian in the Christian tradition: "It seems to follow [from this commandment] that man's love for himself is the model of his love for another. But the model exceeds the copy. Therefore, out of charity, a man ought to love himself more than his neighbor." The change in understanding this commandment came much later in the Christian tradition, at the time of the Enlightenment.

Aristotle distinguishes the truly brave soldier from five other soldiers that are frequently, but mistakenly, called brave. One of the five is the experienced, mercenary soldier. Aristotle explains that such soldiers appear to be brave, because "there are many empty alarms in war, of which [mercenaries] have had the most comprehensive experience; therefore they seem brave." Furthermore, their experience "makes them most capable in attack and in defence, since they can use their arms and have the kind that are likely to be best both for attack and for defence." These soldiers turn cowards, however, "when the danger puts too great a strain on them and they are inferior in numbers and equipment; for they are the first to fly." But this contrast between brave soldiers and mercenaries is not a weakening of Aristotle's position that acquiring a virtue requires habituation over time. He is not saying that one can become brave without experience, but only that one can have experience without being brave. This is true because both virtues and vices are acquired by experience: "By doing the acts that we do in the presence of danger, and being habituated to feel fear or confidence, we become brave or cowardly."

While the primary virtue pertaining to soldiers is bravery, and while Plato and Aristotle focused on (practical) wisdom's role as a virtue of individuals concerned with their own good and as a virtue of statesmen concerned with the good of a political body, Aquinas also identified military practical wisdom as a species of that virtue. He explains that "the execution of military service belongs to [bravery], but the direction, especially in so far as it concerns the commander in chief, belongs to [practical wisdom]." And the practical wisdom of political leaders and of military commanders are both concerned with the common good, but in different ways: ". . . in those things also which are in accordance with reason, there should be not only political [practical wisdom], which disposes in a suitable manner such things as belong to the common good, but also a military [practical wisdom], whereby hostile attacks are repelled."

A distinction Aquinas makes in this context between military and commercial activity is worth noting, because it sheds light on the reason the profession of arms is a profession. He observes a possible objection to his claim that there is a species of the virtue of practical wisdom pertaining to military leadership:

Just as military business is contained under political affairs, so too are many other matters, such as those of tradesmen, craftsmen, and so forth. But there are no species of [practical wisdom] corresponding to other affairs in the state. Neither therefore should any be assigned to military business.

Aquinas then responds to the objection: "Other matters in the state are directed to the profit of individuals, whereas the business of soldiering is directed to the protection of the entire common good." The essential distinction between those occupations that are professions and those that are not is that the former, but not the latter, serve the common good. Medicine is a profession; the world's oldest profession is not. Therefore, Aquinas's distinction in this passage implies that military service is a profession, but that business is not. And that is how those two occupations have in fact been considered throughout most of Western history. It is not the case, however, that all apparent members of the professions are true professionals. Just as the physician who encourages terminally-ill persons to commit suicide and assists them in doing so is not a professional, because he or she is not providing a service, neither is the soldier who is more concerned about his or her career than about defending the nation. Likewise, professional soldiers are distinguished from warlords by the objectives they strive to attain. Moreover, while mercenaries are often called professional soldiers, their status as true professionals is determined by the characteristics (virtues or vices) of the wars in which they fight and by the ordering of their motives in fighting. Within any profession, those whose ultimate objectives are economic are not true professionals.

It is also worth noting that the list of virtues discussed by Aristotle includes truthfulness, which is essential to the profession of arms and to the honor codes of the American service academies. And he includes theft in a list of actions that are always wrong.

This brief glimpse at classical moral philosophy can be summarized by observing that it is a theory of both morality and moral development. The means and the end are necessarily related to one another. There is only one way to become a virtuous person: to perform virtuous actions until the habit of doing so becomes a trait of one's character. But while experience is a necessary condition for being virtuous, it is not a sufficient condition, because the way to become vicious is to perform vicious actions until the habit of doing so becomes a trait of one's character.

III. Kohlberg's Empirical Research Project

The moral tradition of Plato, Aristotle, and Thomas Aquinas was rejected during the Enlightenment. And if MacIntyre's position--which I believe is consistent with the evidence available to us--is correct, we now have a plurality of ethical theories constructed from fragments of that earlier tradition. We also have a new theory of moral development, quite different from Aristotle's. The chief figure in the new moral development theory is psychologist Lawrence Kohlberg. And Kohlberg's work was influenced strongly by that of Swiss psychologist Jean Piaget.

Piaget and Kohlberg belong to the cognitive-developmental tradition of developmental psychology. A key concept of this tradition is that moral reasoning and its development occur naturally or spontaneously (without formal training) when people interact with others in social groups. As Piaget makes the point, "the child's attitude takes shape without any direct influence on the part of the adult." We already see a sharp contrast with the virtues tradition. According to Aristotle, one of the objectives of a practically-wise political leader is to enact legislation that will contribute to the moral formation of children:

It is difficult to get from youth up a right training for virtue if one has not been brought up under right laws; for to live temperately and hardily is not pleasant to most people, especially when they are young. For this reason their nurture and occupations should be fixed by law; for they will not be painful when they have become customary.

Piaget conducted his research by observing school children at play and asking them questions about the rules of their games and certain hypothetical ethical dilemmas. After interviewing nearly one hundred children between the ages of six and twelve, Piaget concluded that he had identified "three great periods in the development of the sense of justice in the child. One period lasting up to the age of 7-8, during which justice is subordinated to adult authority; a period contained approximately between 8-11, and which is that of progressive equalitarianism; and finally a period which sets in towards 11-12, and during which purely equalitarian justice is tempered by considerations of equity."

Piaget's research, like Fiedler's, is a combination of empirical experiments and philosophical mistakes. He maintains that all morality "consists in a system of rules, and the essence of all morality is to be sought for in the respect which the individual acquires for these rules." But this is false, because Aristotle's morality (to cite one of many examples) does not consist in a system of rules. Rules play an extremely minor role in his theory. Acquisition of the virtue of practical wisdom enables one to make excellent decisions in situations so complex that no existing system of rules is capable of providing guidance.

Piaget writes: "Our study of the rules of a game led us to the conclusion that there exist two types of respect, and consequently two moralities--a morality of constraint or of heteronomy, and a morality of cooperation or of autonomy." But the distinction between heteronomy and autonomy plays a central role in Kant's moral theory. Piaget could not have reached this conclusion on the basis of empirical research if Kant had not previously made the distinction within a conceptual, non-empirical argument. Moral philosopher Stephen Toulmin comments, "Piaget's account of psychological development is based quite as much on concepts derived from the Kantian philosophy in which he was first trained, as on the results of his own experiments and observations; so that it needs to be criticized as much on conceptual as on empirical grounds."

Kohlberg's work relies heavily on Piaget's: "Our psychological theory of morality derives largely from Piaget, who claims that both logic and morality develop through stages and that each stage is a structure which, formally considered, is in better equilibrium than its predecessor." He identifies three levels of moral development: preconventional, conventional, and postconventional. And, like Piaget, he believes he can locate approximately the ages at which persons normally reach these various levels:

The preconventional moral level is the level of most children under 9, some adolescents, and many adolescent and adult criminal offenders. The conventional level is the level of most adolescents and adults in our society and in other societies. The postconventional level is reached by a minority of adults and is usually reached only after the age of 20.

Kohlberg's theory is more finely tuned than Piaget's, however, in that each of his levels has two stages. (See Table.)

Kohlberg makes several claims about these stages of moral development. Their sequence is invariant:

"Under all conditions except extreme trauma, movement is always forward, never backward. Individuals never skip stages; movement is always to the next stage up." Their existence is empirically verifiable: "The question of whether cognitive stages exist . . . is an empirically testable question." And they are found in all cultures: "Over a period of almost twenty years of empirical research, my colleagues and I have rather firmly established a culturally universal invariant sequence of stages of moral judgment."

Kohlberg and his colleagues have conducted empirical research in Britain, Canada, Honduras, India, Israel, Mexico, Taiwan, and Turkey, as well as in Chicago.
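
Two of these structural claims, forward-only movement and no skipped stages, are precise enough to state mechanically. A minimal sketch of that structure (the formalization is mine, not Kohlberg's):

    # A hedged formalization of two of Kohlberg's structural claims:
    # stage movement is always forward, and never skips a stage.
    STAGES = range(1, 7)  # Kohlberg's six stages

    def is_admissible(history):
        """True if an observed sequence of stages obeys both claims."""
        in_range = all(s in STAGES for s in history)
        stepwise = all(b in (a, a + 1) for a, b in zip(history, history[1:]))
        return in_range and stepwise

    print(is_admissible([1, 2, 2, 3]))  # True: forward, one step at a time
    print(is_admissible([2, 1]))        # False: backward movement
    print(is_admissible([2, 4]))        # False: stage 3 skipped

Whether actual longitudinal data satisfy such a check is an empirical question; whether passing it constitutes moral development is, as argued below, a philosophical one.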

Kohlberg's method of empirical research involves presenting persons with hypothetical ethical dilemmas and then asking questions. "Kohlberg's stages were defined in terms of free responses to ten hypothetical moral dilemmas." But while the answers may be free, the questions are based upon philosophical assumptions and contain concepts that belong to certain sets of ethical theories, but not to others. The research project is designed in such a manner that the results are inevitable.

One of Kohlberg's dilemmas involves a man who develops a drug that might be able to cure a woman's rare form of cancer, but charges far more than it cost him to produce the drug and far more than the woman's husband can pay for it. And one of Kohlberg's questions is whether the druggist has the right to charge such a high price if doing so was legal. And he states elsewhere that there are certain basic categories used by every moral stage-structure or every moral theory, such as the categories of rights and duties. But historian Richard Tuck has dated the first theory of natural rights at approximately the year 1350, seventeen centuries after the deaths of Plato and Aristotle. And we know of no origin of rights in any non-Western moral tradition. Rights appear in Kohlberg's fifth and sixth stages of moral development. But what could Kohlberg possibly learn about the level of moral development of a true Platonist or Aristotelian (or a true Buddhist or Confucian) by asking questions about the existence of rights? Does the absence of the words virtue and vice from Kohlberg's table of moral-developmental stages tell us more about moral development or about Kohlberg's questions?

Kohlberg agrees with sociologist Emile Durkheim that the second element of morality is altruism, "an impersonal or unselfish end." But we have already seen in discussions of prudence, individual good and common good, and self-love a fundamental difference between virtue ethics and modern ethics in understanding the relationship between ethics and the self. MacIntyre points out that it was in the seventeenth and eighteenth centuries that morality "came generally to be understood as offering a solution to the problems posed by human egoism" and that "the content of morality came to be largely equated with altruism." Again, Kohlberg is biased in the direction of ethical theories of relatively-recent origin. He assumes in advance that the highest ethical theories are modern ethical theories, and that the highest is Kantianism as interpreted by his Harvard colleague, philosopher John Rawls, and then constructs the research project in such a way that these theories come out on top.

And Kohlberg makes little attempt to conceal his Kantian assumptions. While he frequently claims that his stages have been empirically verified, he almost as frequently claims the superiority of Kantian ethical theory on non-empirical grounds: "The tradition of moral philosophy to which we appeal is the liberal or rational tradition, in particular the formalistic or deontological tradition running from Immanuel Kant to John Rawls." Therefore, while he claims that anyone who interviewed children about moral dilemmas and who followed them longitudinally in time "would come to our six stages and no others" and that the correctness of the stages as a description of moral development "is a matter of empirical observation and of the analysis of the logical connections in children's ideas, not a matter of social science theory," his confidence is not justified. He has failed to distinguish empirical observations from assumptions about which ethical theories are superior.

Another problem with Kohlberg's project is his inability to distinguish progress from regress. Even if he could discover by empirical research an invariant sequence of moral stages, how would he know these were stages of moral development? He sometimes acknowledges that his method cannot determine which stage is superior: "Psychology finds an invariant sequence of moral stages; moral philosophy must be invoked to answer whether a later stage is a better stage." More frequently, however, he assumes that stages later in the sequence are morally superior. At one point he claims to have successfully defined "the ethically optimal end point of moral development." But that cannot be accomplished by empirical psychology.

Piaget's and Kohlberg's research projects are fundamentally and fatally flawed for the same reason Fiedler's is: all three scholars attempt to accomplish by empirical research what cannot be accomplished by empirical research. They make philosophical assumptions and consequently present some empirical observations embedded within confused philosophical arguments. Nevertheless, they have an appeal that philosophical arguments do not, because they are "scientific." A collection of papers entitled A Century of Psychology as Science is introduced by the sentence:

As is well known, the centenary of [Wilhelm] Wundts establishment [in 1879] of the Leipzig laboratory was taken by the American Psychological Association (and other groups of psychologists within and outside of the United States) as occasion for the formal celebration of the hundredth anniversary of scientific psychology.

What actually happened in the nineteenth century, however, was not that psychology became scientific, but rather that psychologists claimed the status of empirical scientists. Psychology had been a science for more than two millennia before Wundt was born. The nineteenth-century innovation was to attempt to pattern psychology after the natural sciences. If all human behavior were determined, it would be possible, in principle, to learn all we can learn about psychology by empirical research. The question whether all human behavior is determined is itself one that cannot be answered by empirical research. For purposes of thinking about the character development of military leaders, though, we should assume that determinism is false. If determinism is true, our assumption itself was determined to be false by events that took place before we were born. But if determinism is false, we have made the correct assumption. And the assumption that we have free will is especially appropriate in this context, because decision-making is such an important component of military leadership.

And it should be noted that while research in moral philosophy is not typically quantitative, it is no less rigorous than statistical research. It is not an historical accident that some of the greatest philosophers--Pythagoras, Descartes, Pascal, Leibniz, Whitehead, Russell--were also important mathematicians. In order for a moral-philosophical argument to be sound, it must be valid. And logic is a sub-discipline of both mathematics and philosophy.

IV. Kohlberg's Moral Theory

When we see that Kohlberg is doing moral philosophy, not just empirical science, we can compare his moral theory to those of moral philosophers. We have already seen that there is no mention of virtues or vices within his stages of moral development. The following is part of his explanation of his low regard for virtue ethics:

The emphasis on moral virtues that are acquired by habit derives from Aristotle, whose bag of virtues included temperance, liberality, pride, good temper, truthfulness, and justice. [Hugh] Hartshorne and [Mark] May's bag included honesty, service, and self-control. The Boy Scout bag is well known--a Scout should be honest, loyal, reverent, clean, and brave. My quick tour through the ages indicates that the trouble with the bag of virtues approach is that everyone has his own bag. The problem is not only that a virtue like honesty may not be high in everyone's bag, but that my definition of honesty may not be yours.

But the same can be said about the "self-chosen ethical principles" that make it into Kohlberg's Stage 6. We can and do frequently disagree about which principles are important and about how they are to be interpreted. This is such a poor argument that it almost seems unfair to criticize it, because one wonders whether Kohlberg intended it to be taken seriously.

Persons at Kohlberg's two lowest stages of moral development are characterized in terms of their own interests and their regard or lack of regard for the interests of others. All ethical theories worth taking seriously agree that those unconcerned with the interests of others are morally immature. But Kohlberg does not address the question of whether promoting the interests of others might be the best way of promoting one's own interests. He sees a world of conflicting interests, in which there are many atomistic, individual goods, but no common good. In other words, he holds the view that Aristotle ascribes to the majority of citizens of his day, but that Aristotle himself holds to be mistaken.

The most striking thing about Stage 3 is that it includes belief in the Golden Rule, but is nevertheless three stages below the stage characterized by "self-chosen ethical principles." Persons advancing to Stage 4 leave the Golden Rule behind and take "the point of view of the system that defines roles and rules."

The utilitarian and social-contractarian traditions are found in Stage 5. Utilitarianism seeks to provide the isolated individual with reasons to promote the interests of others, instead of focusing exclusively on his or her own. Most attempts amount to little more than saying that anyone else's utility is just as important as one's own. One could agree with that, and still see no reason to stop focusing on self-interest.

The principal members of the social contract tradition are Thomas Hobbes, John Locke, and Jean-Jacques Rousseau. Whereas Aristotle believes that man is by nature a political animal, Hobbes maintains, in opposition to Aristotle, that in the natural condition of mankind life is "solitary, poor, nasty, brutish, and short." The social contractarians argue that it will be better for each of us if we voluntarily give up some of our rights, form a social contract, and promote the interests of others. But one could reason that while it would be better for all if all gave up some of their rights and promoted the good of all, it would be even better for himself if everyone else promoted the good of all and he promoted his own good. Any theory that divorces the individual good from the common good is hard-pressed to solve the problem of moral motivation.

Finally, Stage 6 is roughly Kohlberg's interpretation of Rawls's interpretation of Kant. But following "self-chosen ethical principles" can mean many different things, depending upon what principles one chooses for oneself. And there is great disagreement among experts, as well as non-experts, about the meaning of phrases such as "the equality of human rights," "respect for the dignity of human beings as individual persons," "the moral point of view," "the nature of morality," and "persons as ends in themselves."

V. Conclusion

In looking for a theory of character development for military leaders, we have two major options. The virtue ethics tradition of Plato, Aristotle, and others is both a theory of morality and a theory of moral development. One acquires the virtues by acting virtuously. Becoming virtuous is good both for other persons and for oneself (though doing what is good for oneself may lead to one's death on the battlefield). This tradition assumes that we have free will, and one of the virtues, military practical wisdom, enables one to make excellent decisions about the military protection of the common good. While this is a non-quantitative tradition, it is no less intellectually rigorous than empirical and mathematical research methods. And while a solid foundation has been laid, there is still work to be done to increase our understanding of the excellent military commander's virtues and the way they are acquired.

The other option begins with philosophical assumptions--some acknowledged and some unacknowledged--about what it means to be an effective leader or a morally-mature person, and then attempts to increase our knowledge by using research methods patterned after those of the natural sciences. It produces results that have the appeal of empirical evidence, but that suffer from the philosophical mistakes upon which the empirical research project itself is based.

Our choice of a tradition is no trivial matter. And I believe it is clear which tradition we should choose.


Table. Lawrence Kohlberg's Six Moral Stages

Level I: Preconventional

Stage 1: Heteronomous Morality

Content of Stage:

What is right. To avoid breaking rules backed by punishment, obedience for its own sake, and avoiding physical damage to persons and property.

Reasons for doing right. Avoidance of punishment, and the superior power of authorities.

Social Perspective of Stage: Egocentric point of view. Doesn't consider the interests of others or recognize that they differ from the actor's; doesn't relate two points of view. Actions are considered physically rather than in terms of psychological interests of others. Confusion of authority's perspective with one's own.


Stage 2: Individualism, Instrumental Purpose, and Exchange

Content of Stage:

What is right. Following rules only when it is to someone's immediate interest; acting to meet one's own interests and needs and letting others do the same. Right is also what's fair, what's an equal exchange, a deal, an agreement.

Reasons for doing right. To serve one's own needs or interests in a world where you have to recognize that other people have their interests, too.

Social Perspective of Stage: Concrete individualistic perspective. Aware that everybody has his own interest to pursue and these conflict, so that right is relative (in the concrete individualistic sense).


Level II: Conventional

Stage 3: Mutual Interpersonal Expectations, Relationships, and Interpersonal Conformity

Content of Stage:

What is right. Living up to what is expected by people close to you or what people generally expect of people in your role as son, brother, friend, etc. Being good is important and means having good motives, showing concern about others. It also means keeping mutual relationships, such as trust, loyalty, respect, and gratitude.

Reasons for doing right. The need to be a good person in your own eyes and those of others. Your caring for others. Belief in the Golden Rule. Desire to maintain rules and authority which support stereotypical good behavior.

Social Perspective of Stage: Perspective of the individual in relationships with other individuals. Aware of shared feelings, agreements, and expectations which take primacy over individual interests.
