The Press in a Free Society

This recording combines two radio interviews in which Ayn Rand responds to questions from university students about the role of the press in a free society. Rand touches on a variety of topics including the role of objectivity in news reporting, the importance of freedom of the press, the immorality of laws requiring “equal time” for opponents of a broadcaster’s editorial policy, why media coverage of the Vietnam War was poor, why a free press is crucial for fair and public trials, and why government licensing of TV and radio stations is a form of censorship.

This program lasts 57 minutes.

The Intellectual Bankruptcy of Our Age

In this 1962 recording, Ayn Rand argues that America’s intellectuals have defaulted on their responsibility to understand and defend capitalism. Rand contends that intellectuals failed to grasp the source of businessmen’s productivity and the destructive effects of collectivist schemes implemented by government coercion. By failing to uphold the value of individual liberty, intellectuals paved the way for authoritarian states and the decline of freedom in the twentieth century.

In a separate program, Rand answers questions prompted by her talk, addressing such topics as charity in a free society, the nature and evils of altruism, what she had in common with liberals, Nietzsche’s subjectivism, economic regulation, pollution remedies and the need for television airwaves to be recognized as private property.

The lecture lasts 50 minutes, and the Q&A program lasts 32 minutes.

 

Q&A on Objectivism

In this radio interview, Ayn Rand answers a wide variety of questions about philosophy in general and her own philosophy, Objectivism. Subjects addressed include the usefulness of philosophy for men of differing abilities, the role of philosophy in Rand’s novels, the purpose of morality, the process of gradual philosophical change, the dominant philosophies of the nineteenth and twentieth centuries, the inability of nineteenth-century thinkers such as Mill, Bentham and Spencer to defend capitalism, the intellectual developments behind the founding of the United States, and how interested individuals can help move the culture in a more rational direction.

The recording is 28 minutes long.

The “Conflicts” of Men’s Interests

In this radio address, Ayn Rand reads aloud her 1962 essay entitled “The ‘Conflicts’ of Men’s Interests” and offers additional commentary. Rand starts by raising a “typical” question that she has received: “Suppose two men apply for the same job. Only one of them can be hired. Isn’t this an instance of a conflict of interests, and isn’t the benefit of one man achieved at the price of the sacrifice of the other?”

Rand then steps back to examine in depth the four interrelated considerations (reality, context, responsibility and effort) that inform a rational person’s view of self-interest. Finally, she explains how each of the four considerations bears on whether two men competing for the same job have an actual conflict of interests.

The original essay first appeared in the August 1962 issue of The Objectivist Newsletter and was later anthologized in The Virtue of Selfishness (1964).

This radio program lasts 24 minutes.

Issues in Education

In this 1964 radio program, Ayn Rand gives her views on education, including the causes of its decline and the principles that should guide rational educators. Rand discusses such issues as the essential purpose of education, the “flight from reason” caused by philosophers such as Kant, the links between altruism and irrationalism in education, the necessity of guiding children toward intellectual independence, the evils of dogma and indoctrination, the difference between fact and interpretation, some essentials of a rational curriculum, the impropriety of forced racial segregation and integration, the parental “responsibility and privilege” to guide their children’s education, and the chaos of college education featuring a “different epistemology in every classroom.”

The program lasts 29 minutes.

Why Should One Act on Principle?

This lecture was delivered at Boston’s Ford Hall Forum on April 24, 1988, then published in the February 27, 1989, issue of The Intellectual Activist and later anthologized in Why Businessmen Need Philosophy: The Capitalist’s Guide to the Ideas Behind Ayn Rand’s “Atlas Shrugged” (2011).

 

There is no bromide more common today than the statement that we live in a “complex” world. Whatever the subject of discussion, this claim is routinely offered at the outset as a kind of magic incantation and all-purpose depressant. Its effect is not to inspire people to think, but to induce a sense of helplessness, weariness, hopelessness. It is used not to solve problems, but to assure people that there are no solutions.

The past, our cultural spokesmen often suggest, was different; once upon a time we could find answers to our questions and know what to do, but no longer. Life is just too complicated now for — here is the dread word — “simple” answers. The word “simple” itself has become the basis of a whole new condemnation, contained in the modern term “simplistic.” When I argue with people, I hear all kinds of attacks from them thanks to my Objectivist views — I am selfish, impractical, too idealistic, atheistic — but the commonest attack by far is: you are being “simplistic.”

“Simplistic” is not the same as “oversimplified.” If you accuse someone of “oversimplifying,” you imply that it is all right to simplify, but that one must do it rationally, not leaving out important factors. The modern charge “simplistic” conveys the notion that it is not merely an issue of some omitted factor; it implies that the simple, the simple as such, is naive, unrealistic, bad. The term is an anti-concept intended to smuggle into our minds this idea: you have simplified something and by that very fact you have erred, distorted, done wrong. This amounts to legislating simplicity out of existence. I call this attitude “complexity-worship” — and it is everywhere today.

How should we deal with all the “complex” situations we encounter, according to the conventional wisdom? The answer implicit in today’s practice is: by disintegration. That is: break up the initial problem into many parts, then throw most of them out as too complicated to consider now, then throw out some more. Keep eliminating aspects until finally only some narrow concrete is left on the table to argue about.

Suppose, for example, that some American businessmen are upset about Japanese sales in the U.S., which they feel are cutting into their own sales, and they go to the government for relief. Of course, if they came to me, I would say: you must decide whether you advocate the principle of free trade or the principle of protectionism. Then I would offer a proof of the evils of protectionism, showing why it will harm everyone in the long run, American businessmen included, and why the principle of free trade will ultimately benefit everyone. That would be the end of the dilemma, and the people demanding tariffs would be sent packing.

But this kind of analysis would be ruled out today by any congressional committee or academic commission studying economic problems. We cannot be “simplistic,” they would say; we cannot talk in generalities like “free trade” or “protectionism.” How, they would ask, can we possibly make sweeping statements on this level, which involve every country, every product, every group of consumers and producers, every era of history? Life is just too complex for that. What then do we do in the face of such complexity? Basically, they answer, we have to narrow our focus profoundly. We must not talk about free trade in general, but free trade with Japan — and not Japanese industry as a whole, of course, but only Japanese cars; we’ll have to leave computers and TV sets for another committee to wrestle with. And we’ll have to leave trucks out, since that introduces too many tricky factors; automobiles are enough to worry about — but maybe we should include small pickup trucks, because they’re pretty close to cars; let’s farm that one out to a subcommittee to study separately — and of course we’re not talking about forever here or even ten years. We’ll confine ourselves to a year, say, or even just this season, and we’ll renegotiate the issue the next go-round. In the end, the question being debated is not: should we adopt a policy of free trade with foreign countries? but rather: should we place a 30 percent import duty on certain kinds of Toyotas and Datsuns for the next six months?

Now, we are told, the question is not “simplistic.” Unfortunately, now it is also not rationally answerable. How is one to decide what to do in this case, once one has thrown out the appeal to principles as naive? The answer is: you hold hearings, and all the lobbyists involved scream, bribe or make threats, and everybody offers contradictory compromises. The Toyota people say that 30 percent is unfair, but if we cut it to 20 percent they will try “voluntarily” to sell less in the U.S. The Chrysler people insist that this is not good enough, but maybe they can pay their workers more if Toyota is really squelched — so the labor unions jump in and demand a crackdown on Toyota, while the consumer groups are busy demanding more of the cheaper Japanese cars. What finally comes out of it all? Some range-of-the-moment deal — a “moderate” squeeze on the Japanese answered by a new Japanese retaliation against us, a new government subsidy to Detroit, a new agency to help consumers finance auto loans, a bigger budget deficit and another committee to review the whole situation next month or year. After all, we are told, no policy is set in stone. There are no absolutes. We have to be “flexible” and “experimental.”

Philosophically, this is called pragmatism. In this approach, there are no principles, like “free trade” or “protectionism”; there are only concretes, like Toyotas or Chryslers, and groups of people who fight over them with opposite desires. So the only solution is to find some temporary expedient that will appease the loudest screamers for the moment — and then take a drink until the whole mess erupts again.

It is no wonder that people who employ this method believe that life is complex and that there are no answers to any problems. Yet the paradox is that they use this method because, they insist, life is too complex for us to rely on principles.

Some philosophical thought is clearly in order here. Is life complex? If so, does man have a rational (as against a pragmatic) means of dealing with its complexity? If so, do our leaders fail as badly as they do because they are rejecting man’s proper means of dealing with complexity? My answer to all these questions is a resounding yes. My thesis this evening is: life is complicated, enormously so; but man has a conceptual faculty, a faculty of forming principles, which is specifically his weapon for coping with complexity. Yet our leaders, thanks to centuries of bad philosophy, distrust and reject this faculty, and are therefore helpless to lead or to know what to do.

 

Let us begin by defining “complex.” “Complex” is a quantitative idea; the “complex” is that which involves many elements or units, all tied together or interrelated. The “simple,” by contrast, is that which involves one, or at most a few, units. For example: if the officials of the Ford Hall Forum want to attract a large audience, they have to grapple with many different issues: whom should they invite? does he have to be famous? what should he speak about? will he agree to come? can he condense his talk into 50 minutes? how will he fit into the rest of the year’s program? This is a relatively complex problem. By contrast, if the audience is here on the night of the talk, clamoring at the doors, and someone inside asks: what do we do now? — that is a simple problem, the solution being to open the doors and let the people in. Here we have no complexity; there is only one element to deal with.

Now the first thing to note is that human life is inherently complex. Contrary to all the propaganda we hear, this is not a distinctively modern problem. It is not a result of the Industrial Revolution, the growth of population or the fact of worldwide communication. All these developments have brought certain new factors into our lives, but they have also removed problems. They have given each of us in many contexts fewer units to think about and have thus made life simpler. Consider, for example, the utter simplicity of feeding yourself today via a trip to the supermarket to buy some frozen food, as against the situation in medieval days. Think how many different questions and separate tasks would have been involved in that era for you merely to reach the point of having a dinner on the table fit to eat.

Man’s life is complex in every era, industrial or not. He always has countless choices to make, he has the whole world spread before him, he must continually make decisions and weigh results keeping in mind a multiplicity of factors. Even in the most primitive times, the caveman had to decide what to hunt, what risks to take, what weapons to use, how to make them, how to protect his kill, how to store, preserve, apportion it. And he had to do all this long before there was any science, long before there were any rulebooks to guide him in all these activities. In his context of knowledge, stalking his prey was an enormous complexity, no easier for him than our hardest problems in our advanced context are for us to solve.

“Simplicity,” in the absolute sense, is the prerogative only of animals. Animals function automatically to sustain themselves; they are programmed to act in certain ways without the need to work, produce wealth, choose among alternatives, weigh results. They merely react to some dominant sensation in a given situation; a dog, for instance, smells his bone and runs to get it. What could be simpler? But man cannot survive by reacting mindlessly to sensations.

No human being can escape the problem of dealing with complexity and somehow making it simple and therefore manageable. This applies to the modern pragmatists, too, who make such a fetish of complexity. But they try to solve the problem by reverting to the animal level — by narrowing their focus to some isolated concrete, like the dog reacting to the smell of a bone, while evading all the other concretes to which it is connected in reality. They solve the problem of complexity by throwing out vast amounts of relevant information, thereby reducing themselves to helplessness.

 

The proper, human method is the exact opposite. We need to retain all the data we can — the more facts we can keep in mind in making any decision, the better off we are — but we need to retain all these facts in a form we can deal with. We can’t be expected to read or rattle off to ourselves, before every action, a whole encyclopedia of past human experiences, or even a single volume of tips, rules and practical suggestions. Somehow we must gather and retain a wealth of information, but in a condensed form. This is exactly what is accomplished by the distinctively human faculty, the conceptual faculty — another name for which is “reason.”

Concepts are man’s means of condensing information. They are his means of unit-reduction. They are his means of converting the complex into the simple, while nevertheless losing no information in the process.

If I utter the statement “All men are mortal,” for example, none of you has any trouble in understanding and applying it. You know what it means for your own life, you make up wills and buy insurance policies to cover the practical contingencies it involves, and you know that mortality applies to all men, past, present, and future. Here is a tremendous wealth of data — information about an unlimited number of units, stretching across the globe from pre-history into the endless future, wherever there were, are or will be men; and yet you have no trouble retaining this vast scale of information in the form of the few words “All men are mortal.” Do you do it by elimination, by narrowing your focus to only one or two men and brushing the rest aside as too complicated? Do you merely look at yourself and a few friends, then say: “I can’t deal with the others now, life is too complicated, I’ll appoint a subcommittee to worry about the rest”?

On the contrary, the key is precisely that you take all the units involved in “man” — you retain all the countless real-life instances, including the ones you’ve never seen and never will, and you put them together into a single new unit, the term “man,” which integrates the totality. You accomplish this feat by processing your perceptual data — by asking: what do various entities have in common? what is essential to them? what differentiates them from the other things I see? In the process, you grasp that, in contrast to other creatures, men all share a certain kind of consciousness, the faculty of reason. So you set aside all the differences among men — including height, hair color, fingerprints, intelligence — and you reach the idea of a rational being, and then designate this by a single word. The result is a vast complexity turned into a simplicity, into a single unit. Now you have the ability to focus, in one frame of awareness, on all the cases to which it applies. You can know truths about all of them, and because they come under “man,” they are subsumed by the concept.

Against this background, let us look specifically at principles. A principle is a basic generalization. It is a conceptual statement integrating a wealth of information about all kinds of concretes that we otherwise would be helpless to deal with or keep in mind. Yet we are able to do it by reducing this information to a few words or even just a few letters, like “e = mc².” A principle is man’s major form of using concepts — using them to reduce the complexity facing him while retaining all the information that is essential for successful action.

There are principles in every field of human endeavor, and men rely on them continually. There are principles of physics, of chemistry, of agriculture — even principles of effective public speaking, which take countless experiences of past speakers and the effects they have, positive and negative, with countless different topics, on countless different audiences, and condense it all into brief, intelligible rules to guide future speakers (such as: “motivate your audience” and “give examples”).

In all these fields, principles are not controversial. Reason has been allowed to perform its proper function and has been seen to be indispensable. In these fields, principles are not asked to compete with tea-leaf readings or with divine revelations.

 

But in the field of morality, the situation, tragically, is the opposite. In the realm of the humanities, we are still in the age of pre-reason. As a result, people do not see the need of concepts to decide moral questions. They do not see that the reason we need moral principles is the same reason we need principles in every realm.

A moral principle is a basic conceptual statement enabling us to choose the right course of action. A proper morality takes into account all the real-life choices men must make. It tells us the consequences to expect from the different choices facing us. It organizes all such information for us, by selecting the essentials; it integrates all the data into a handful of basic rules that we can easily keep in mind, deal with and live by — just as a single concept, “man,” integrates all its instances into a single unit.

If you had no concept of “man,” you could not decide whether a new entity you meet is a man or not. If he were a lot taller and blonder than anyone you had seen so far, say, you would stare in confusion — until you decided what is essential to being a man, i.e., until you conceptualized the relevant data. The same applies to evaluating an action. If you have no moral principles telling you which acts are right and which are wrong, or what is essential to judging a given situation and what is irrelevant, how are you to know what to do and what to avoid?

There are two opposite approaches to moral questions: the principled approach vs. the pragmatist approach. The one tries to integrate, the other to disintegrate. The one tries to broaden the data an individual works with, to draw on all the relevant knowledge man has accumulated, to gain a larger vision and context for the answer to the question — which can be achieved only by invoking man’s means of condensing data, concepts. The other tries to narrow the data base, to shrink the subject to the animal level, to reduce the units by staring only at some isolated percepts.

Suppose, for example, I ask: should one rob a bank?

In pattern, the conceptual individual thinks: “A bank is someone’s property.” Here we see from the very outset the broadening of perspective — he is looking for the abstraction a bank falls under, the concept that names its essence in this context: property. And he grasps that in this respect a bank is just like a home or a machine or a book or a pair of shoes: it is a creation that does not grow on trees, but has to be produced by somebody. Which at once opens his mind to a flood of new data — to everything he knows about the source of books, shoes, banks and the rest: that they presuppose knowledge, inventiveness, independent judgment, focus, work. All these observations are integrated and retained in his mind through a simple principle: “Property is a product of human thought and effort.” From which it becomes apparent that, if men wish to live, they must have title to their product, they need the right to keep and use the results of their effort. This — the right to property — is another principle, which condenses and subsumes all our knowledge of the destructive results of depriving men of their property, not only through bank robbery, but through a thousand other methods besides: it covers what happens when men break-and-enter private homes, or raid farms, or establish socialist states, or plagiarize manuscripts or steal hubcaps. By the device of conceptualizing the action of bank-robbing — i.e., reducing it to essentials and bringing it under principles — we know how to evaluate it. We know that if such behavior is condoned or permitted, the principle involved will lead in the long run to destruction.

The pattern is clear. We are confronted by a concrete — bank-robbing — and we deal with it by considering only a relatively few units, the few principles I mentioned. Yet these contain all the information we have ever gathered about the relevant requirements of human life. So we reach an immediate, decisive answer.

 

Now, by contrast, ask a pragmatist mentality: should I rob a bank? — and his first move is not to conceptualize, but to particularize. The immediate question that comes to his mind is: which bank are we talking about? Chase Manhattan? The 42nd St. branch? Let’s not be “vague” and “simplistic” about this. And how much do you propose to steal? he wants to know; a big bank might not even miss $10,000. And who will you use the money for — yourself, AIDS victims, the poor? Now where are we? Having moved in this direction, having disintegrated the question and treated each bank as a unique case, how is he to decide what to do? You know how — precisely the way bank robbers do decide. They ask: can I get away with it? Or, more exactly: do I feel like trying to get away with it today? Once a man abandons principles, once he dismisses as naive generalities such abstract concepts as ownership, property rights, honesty, justice, there is no way to decide concrete cases except by arbitrary feeling — either his own feeling or that of a group with which he identifies. He ends up using the same method of decision as that of the Japanese tariff committee.

Observe the inversion being perpetrated here. The advocate of principles is the man who actually benefits from the vast data bank of life. He is the one who keeps in mind, when making a decision, the intricate network of interrelated factors, including the implications of his actions for countless similar situations. He is the one who truly faces and deals with the complexity of life, yet he is accused of being “simplistic.” On the other hand, the pragmatist, who scoffs at principles — the man who puts on blinders, eliminates most of the relevant data and ends up staring at an isolated case without context or clue, like a newborn baby — he is the one praised for appreciating the complexity of life and for not being “simple-minded.”

If ever I heard a Big Lie, this is it.

The people who reject principles reject the human method of dealing with complexity. But since they don’t have the animal’s means of coping, either, they are left helpless. In the end, they have recourse only to raw feeling or gang warfare. This is how our politicians are now deciding the life-and-death issues of our economy and foreign policy.

If a man lives by principles, his course of action is in essence predictable; you know what to expect of him. But if a man rejects principles, who knows what he will do next?

Observe that all our leading candidates today, Democratic and Republican alike, take detailed stands on every concrete one can imagine; they issue separate position papers filled with clauses and statistics to cover every trouble spot in Washington and the world — yet no one knows what they stand for or what they will do in office. No one can retain all these disintegrated concretes or add them up into a coherent, predictable direction. The candidates offer us an abundance of plans — but there is no connection among their plans, no unifying principles, neither in domestic affairs nor in foreign. Under these conditions, elections become a crapshoot — especially when we remember that our candidates are masters of the pragmatic “flip-flop,” as it is now called. After all, we are told, every concrete situation is unique; what applied yesterday is not necessarily relevant today. The candidates and office-holders themselves do not know what they are going to do or say next. They are not trying to deceive the country by cunningly concealing some devious ulterior motive; they are merely responding to the latest hole in the dike by sticking fingers in at random, i.e., without any principles to give them guidance. Thus: it is an outrage, said one candidate, to capitulate as President Carter did to the vicious Iranian kidnappers — and here is my plan, he said a while later, for shipping the Iranians arms in exchange for hostages. Or: Russia is an “evil empire” that no one should trust, he said — and here is the new arms treaty that I trust them to obey. Or: let’s get rid of some government departments, let’s abolish the Department of Education — and a few years later a new initiative from him, a proposal to create a Department of Veterans Affairs.

Even today’s politicians feel the need to offer the electorate something more inspiring than shifting concretes. Typically, what they do to fill this need is to use abstract words without reference to reality, not as principles but as empty slogans, to be sprinkled through their oratory as garnish, committing them to nothing, yet sounding large and visionary — words like “peace” or “love” or “Americanism” or the “global environment” or the “public good.” The most brazen practitioner of this policy, though certainly not the only one, was Gary Hart, with his periodic invocation of the need for “new ideas” — which no one could find in any of his detailed position papers.

 

If we are to save our country, what we need is not better politicians, but the only thing that can ever produce them: a code of morality. A proper morality is a set of principles derived from reality, principles reducing the vast complexity of human choices to simple, retainable units, telling us which actions support human life and which ones destroy it. Primarily, the code offers guidance to the individual; then, in the social realm, it offers guidance on political questions. A man who acts on moral principles in this sense is neither a martyr nor a zealot nor a prig. He is a man whose actions are guided by man’s distinctive faculty of cognition. For man, principled action is the only successful kind of action. Moral principles are not ends in themselves; they are means to an end. They are not spiritual luxuries reserved for “higher” souls, or duties owed to God or heaven. They are a practical, earthly necessity to anyone concerned with self-preservation.

If moral principles are to function successfully in human life, however, if they are to play their vital role, they must be accepted as absolutes. You cannot be “flexible” about them, or bend them according to your own or your group’s feelings; you cannot compromise them. This is the opposite of the pragmatist philosophy that dominates our culture, so I want to pursue the point. This will make the role of principles in man’s life stand out even more clearly.

Let’s go back to our bank robber and imagine trying to reach a moral compromise with him. You are the banker, say, and your first response is to tell the intruder to stop because the property in question is yours. The robber says: no, I want your money, all of it. At this point, instead of calling in the police or standing on principle, you decide to compromise; you agree — without duress, as your idea of a moral resolution — to give the robber only part of the money he came to steal. That, after all, would show “flexibility” on your part, tolerance, compromise, the willingness to negotiate — all the things we hear everywhere are the good. Do you see what such a policy would mean and lead to? In Ayn Rand’s words, it would mean a “total surrender” — the recognition of the robber’s right to your property. Once you make this kind of concession, you leave yourself helpless: you not only give up some of your property, but also abandon the principle of ownership. The robber, accordingly, gains the upper hand in the relationship and the power to determine its future. He gains the inestimable advantage of being sanctioned as virtuous. What he concedes in the compromise is merely a concrete (he forgoes some of the loot) — temporarily; temporarily, because now there is no way you can stop him when he comes back with a new demand tomorrow.

The same kind of analysis applies to every case of moral compromise. Imagine, for example, a country with the means to defend itself — e.g., Britain or France in the ’30s — which capitulates, in the name of being “flexible,” to some of the arbitrary demands of an aggressor, such as Hitler. That kind of country thereby invites more demands — to be answered by more “flexibility.” Such a country is doomed from the start (until and unless it changes its fundamental policy). By conceding the propriety of “some” aggression, it has dropped the principle of self-defense and of its own sovereignty, which leaves it without moral grounds to object to the next depredation.

Or suppose you accept the “moderate” idea that individual rights are not absolute and may be overridden by government controls “when the public good requires it” — when the public needs more welfare payments or more Medicare or more censorship of obscenity. In this case, you have agreed with the collectivists that individual rights are not inalienable; that the public good comes above them; that man exercises certain prerogatives not by right, but by the permission of society, as represented by the government. If so, the principle of individual rights has been entirely repudiated by you — in favor of the principle of statism. In other words, in the name of achieving a “compromise” between clashing systems, the essence of one, capitalism, has simply been thrown out, while the essence of the other, socialism, has become the ruling absolute.

Or consider a judge who tries not to be too “extremist” in regard to justice; he decides to “modify” justice by a dose of political favoritism under pressure from the bosses of the local clubhouse. He has thereby dropped the principle of justice. Justice cannot countenance a single act of injustice. What sets the terms of this judge’s compromise, therefore, and decides his verdicts is the principle of favoritism, which permits whatever whims the bosses authorize, including even many verdicts that are not tainted, when this is politically palatable to the bosses. In such a court, a fair verdict is possible, but only by accident. The essence of the system, and its ultimate result, is the elimination of fairness in favor of pull.

 

Either you accept a proper principle — whether individual rights, self-defense, justice or any other — as an absolute, or not at all.

There is no “no-man’s land” between opposite principles, no “middle of the road” which is untouched by either or shaped equally by both. The fact is that man cannot escape the rule of some kind of principles; as a conceptual being, he cannot act without the guidance of some fundamental integrations. And just as, in economics, bad money drives out good, so, in morality, bad principles drive out good. To try to combine a rational principle with its antithesis is to eliminate the rational as your guide and establish the irrational. If, like Faust, you try to make a deal with the devil, then you lose to him completely. “In any compromise between food and poison,” Ayn Rand observes, “it is only death that can win. In any compromise between good and evil, it is only evil that can profit.”

The reason for this is not that evil is more powerful than good. On the contrary, the reason is that evil is powerless and, therefore, can exist only as a parasite on the good.

The good is the rational; it is that which conforms to the demands of reality and thereby fosters man’s life, along with all the values life requires. Such a policy acquires no advantages whatever from its antithesis. To continue our examples: a banker does not need the help of a robber who is trying to loot him. Nor does a free country need the attacks of an aggressor. Nor does an individual seeking to sustain himself need the jails of a dictator. Nor does the administration of justice benefit from subversion by corrupt bosses. By its very nature, the good can only lose by trafficking with the evil.

The evil is in exactly the opposite position. The evil is the irrational; it is that which contradicts the facts of reality and thereby threatens man’s life. Such a policy cannot be upheld as an absolute or practiced consistently — not if one wishes to avoid immediate destruction. Evil has to count on some element of good; it can exist only as a parasite, only as an exception to the virtues on which it is relying. “The irrational,” in Ayn Rand’s words, “has everything to gain from the rational: a share of its achievements and values.” A producer does not need a robber, but a robber does need the producer on whom he preys. And so do robber-nations need freer countries — which they seek not to annihilate, but to rule and loot. And no collectivists, not even the Nazis or the Communists, want to throttle every act of individual self-assertion; they need men to think and act as individuals to some extent, or their own regimes would collapse. And no political boss seeks to reverse every proper verdict; the boss mentality counts on the appearance of justice, so that men will respect and obey the courts, so that then, when he wishes it, the boss can intervene behind the scenes and cash in on that respect.

Evil is not consistent and does not want to be consistent. What it wants is to get away with injecting itself into the life-sustaining process sometimes — short-range, out-of-context, by arbitrary whim. To achieve this goal, all that it needs is a single concession by the good: a concession of the principle involved, a concession that evil is proper “sometimes.” Such a compromise is evil’s charter of liberty. Thereafter, the irrational is free to set the terms and to spread by further whim, until the good — and man — is destroyed.

The power of the good is enormous, but depends on its consistency. That is why the good has to be an issue of “all or nothing,” “black or white,” and why evil has to be partial, occasional, “gray.” Observe that a “liar” in common parlance is not a man who always, conscientiously, tells falsehoods; there is no such creature; for the term to apply to you, a few venal whoppers on your part are enough. Just as a “hypocrite” is not a man who scrupulously betrays every one of his own ideas. Just as a “burglar” is not a man who steals from everybody he meets. Just as a person is a “killer” if he respects human life 99.9 percent of the time and hires himself out to the Mafia as an executioner only now and then. The same applies to every kind of corruption. To be evil “only sometimes” is to be evil. To be good is to be good all of the time, i.e., as a matter of consistent, rational principle.

This is why Objectivism is absolutist and why we condemn today’s cult of compromise. These cultists would achieve the same end-result more honestly by telling men without equivocation to eschew the good and practice the evil. Evil is delighted to “compromise” — for it, such a deal is total victory, the only kind of victory it can ever achieve: the victory of plundering, subverting and ultimately destroying the good.

Why should one act on principle? My answer is: in the end, men cannot avoid it — some principle always wins. If the right principles, the rational ones, are not conscious, explicit absolutes in men’s minds, then their evil opposites take over by default and ultimately win out. That is why, in our pragmatist, unprincipled age, all the wrong principles are winning. That is why every form of irrationality, cowardice, injustice and tyranny is sweeping the world.

It is not enough, therefore, merely to act “on principle.” Man needs to act consciously on rational principles, principles based on the facts of reality, principles that promote and sustain human life. If you accept irrational principles, such as religious dogmas or mystical commandments, you will find that you can’t live by them consistently, precisely because they are irrational and clash with reality, and you will be driven to pragmatism in despair as your only alternative.

For example, if your moral principle is self-sacrifice, you can’t expect to follow it consistently, as an absolute — not if you want to stay alive. Remember that a principle integrates countless concretes. If you tried to practice as a principle the injunction to give up — to give up your values for the sake of God or of others — think what such a course would demand. Give up your property — others need it. Give up your pursuit of happiness — you are not on earth to gratify selfish desires. Give up your convictions — who are you to think you know the truth when God or society, who is your master, thinks otherwise? Give up your choice of personal friends — you are supposed to love everybody, above all your enemies; that, after all, is an act of real sacrifice. Give up your self-defense — you are supposed to turn the other cheek when Russia takes over Nicaragua — or Florida. Even if you decide to renounce everything — to become like the medieval saints, mortify the flesh, drink laundry water, sleep on a rock for a pillow — so long as you are motivated by any personal quest, even if it is only for joy in heaven, you are still condemned as selfish. Who could obey such a code? Who could follow, day after day, in all the concrete situations of life, such a rule? No one could, and no one ever did. Yet that is what would be meant by accepting self-sacrifice as virtue, i.e., as a moral principle.

What then have men done in the face of such an inverted moral code? Instead of running from it in horror and proclaiming an ethics of rational self-interest, they accept the creed of self-sacrifice — but quickly add that, of course, there are no absolutes and one has to compromise and be “moderate” in order to survive. In other words, they preach irrational principles, then half-practice, half-evade them. No wonder they are filled with terror at the prospect of acting on principle.

If you hold irrational principles, your principles become a threat to your life, and then compromise and pragmatism become unavoidable. But that too is no answer; it is merely another threat to your life.

The only solution is a code of rational principles — a logical, scientific approach to morality — an ethics based on reality, not on supernatural fantasy or on social convention.

This leads us to the base of philosophy, metaphysics, on which ethics itself depends — and to the principle that underlies all other principles. I mean the principle that there is a reality, that it is what it is, that it exists independent of man, and therefore that we must recognize the facts of reality, like them or not, and live accordingly. This is the fundamental which any rational approach to ethics presupposes. Morality consists of absolutes only because it is based on facts which are absolute.

On the other hand, if a man says that there is no reality — or that reality is anything he or society wants it to be — then there are no moral principles, either, and no need of any. In this kind of setup, all he has to do is assert his arbitrary wishes — no matter how bizarre or contradictory — and the world will fall into line. This is the actual foundation of the pragmatist viewpoint. Pragmatism as a philosophy does not start by attacking moral principles; it starts by denying reality; it rejects the very idea of an external world to which man must adhere. Then it concludes: anything goes — there are no absolutes — there’s nothing to stand in our way anymore.

 

Am I exaggerating here? Last month, I was speaking at a convention of philosophers in Oregon. The man who spoke before me on the program was a philosopher who had moved a few years before to Washington, D.C., to work for the National Endowment for the Humanities. At one point in his talk, he explained to the audience what he called, ironically, the “metaphysical lesson” he had learned from dealing with Congress. The people he met in the halls of Congress, he began, often wore buttons announcing this lesson explicitly. The buttons read: “Reality is negotiable.”

When he first went to Washington, he said, he had thought that people began the legislative process by studying the facts of a given problem, the data which were an indisputable given and had to be accepted. He had thought that politicians debated which policy was appropriate on the basis of these facts. What he observed, however, was that congressmen would come to the bargaining table with their policy decisions long since made, and then rewrite any unpleasant facts to make them fit in with these decisions. For example, if a Republican objected that a new social program would increase the budget deficit, the Democratic aides would be sent off to redo the projections for next year’s tax revenues; they would jack up the expected GNP or project a new rate of interest or come up with some other prediction which would ensure that, in their new calculations, everything would come out as they wanted and no budget deficit would result. The Republicans accepted this approach and operated by the same method.

But what about the real numbers, you ask — the real predictions, the real facts? Who knows and who cares? you would be answered. “Reality is negotiable.”

These buttons are supposed to be an “in” joke. But the joke is that they are no joke: the wearers learned the message they are flaunting in all their Ivy League schools, and they believe it — a fact proved by their actions, which are not merely concrete-bound, but militantly so. Their actions, as we may put it, are not merely unprincipled, but unprincipled on principle.

How do you fight a mentality like this and prevent it from leading you to disaster? You need to begin on the deepest level; you need more than a code of ethics. You need a philosophy that recognizes and upholds reason, a philosophy built on the fact that facts are not negotiable — that what is, is.

In one sense, “What is, is” is the most complicated statement you can utter; it pertains not just to every man, dog or star, but to everything, everything that is, was or ever will be. It gives us, in effect, the results of a tour of the entire universe — in the form of three brief words, which, if you understand and accept them, fix in your mind and make available to you for the rest of your life the essential nature of existence. That is the most eloquent example there is of our conceptual faculty at work, expanding incalculably the range and power of our minds, reducing complexity to simplicity by the power of principle — in this case, metaphysical principle. Nothing less can give men the means to live in the world successfully or the foundation to act on moral principle.

 

Why should one act on principle? The deepest and final answer is: for the same reason one should jump out of the path of a speeding truck — because if one doesn’t, one will be squashed by an unforgiving nemesis: an absolute reality.

The Analytic-Synthetic Dichotomy

This essay was first published in the May–September 1967 issues of The Objectivist and later anthologized in Introduction to Objectivist Epistemology (1990).

Introduction

Some years ago, I was defending capitalism in a discussion with a prominent professor of philosophy. In answer to his charge that capitalism leads to coercive monopolies, I explained that such monopolies are caused by government intervention in the economy and are logically impossible under capitalism. (For a discussion of this issue, see Capitalism: The Unknown Ideal.) The professor was singularly unmoved by my argument, replying, with a show of surprise and disdain:

“Logically impossible? Of course — granted your definitions. You’re merely saying that, no matter what proportion of the market it controls, you won’t call a business a ‘coercive monopoly’ if it occurs in a system you call ‘capitalism.’ Your view is true by arbitrary fiat, it’s a matter of semantics, it’s logically true but not factually true. Leave logic aside now; be serious and consider the actual empirical facts on this matter.”

To the philosophically uninitiated, this response will be baffling. Yet they meet its equivalents everywhere today. The tenets underlying it permeate our intellectual atmosphere like the germs of an epistemological black plague waiting to infect and cut down any idea that claims the support of conclusive logical argumentation, a plague that spreads subjectivism and conceptual devastation in its wake.

This plague is a formal theory in technical philosophy; it is called: the analytic-synthetic dichotomy. It is accepted, in some form, by virtually every influential contemporary philosopher — pragmatist, logical positivist, analyst and existentialist alike.

The theory of the analytic-synthetic dichotomy penetrates every corner of our culture, reaching, directly or indirectly, into every human life, issue and concern. Its carriers are many, its forms subtly diverse, its basic causes complex and hidden — and its early symptoms prosaic and seemingly benign. But it is deadly.

The comparison to a plague is not, however, fully exact. A plague attacks man’s body, not his conceptual faculty. And it is not launched by the profession paid to protect men from it.

Today, each man must be his own intellectual protector. In whatever guise the theory of the analytic-synthetic dichotomy confronts him, he must be able to detect it, to understand it, and to answer it. Only thus can he withstand the onslaught and remain epistemologically untouched.

The theory in question is not a philosophical primary; one’s position on it, whether it be agreement or opposition, derives in substantial part from one’s view of the nature of concepts. The Objectivist theory of concepts is presented above, in Ayn Rand’s Introduction to Objectivist Epistemology. In the present discussion, I shall build on this foundation. I shall summarize the theory of the analytic-synthetic dichotomy as it would be expounded by its contemporary advocates, and then answer it point by point.

The theory was originated, by implication, in the ancient world, with the views of Pythagoras and Plato, but it achieved real prominence and enduring influence only after its advocacy by such modern philosophers as Hobbes, Leibniz, Hume and Kant. (The theory was given its present name by Kant.) In its dominant contemporary form, the theory states that there is a fundamental cleavage in human knowledge, which divides propositions or truths into mutually exclusive (and jointly exhaustive) types. These types differ, it is claimed, in their origins, their referents, their cognitive status, and the means by which they are validated. In particular, four central points of difference are alleged to distinguish the two types.

(a) Consider the following pairs of true propositions:

i) A man is a rational animal.

ii) A man has only two eyes.

i) Ice is a solid.

ii) Ice floats on water.

i) 2 plus 2 equals 4.

ii) 2 qts. of water mixed with 2 qts. of ethyl alcohol yield 3.86 qts. of liquid, at 15.56°C.

The first proposition in each of these pairs, it is said, can be validated merely by an analysis of the meaning of its constituent concepts (thus, these are called “analytic” truths). If one merely specifies the definitions of the relevant concepts in any of these propositions, and then applies the laws of logic, one can see that the truth of the proposition follows directly, and that to deny it would be to endorse a logical contradiction. Hence, these are also called “logical truths,” meaning that they can be validated merely by correctly applying the laws of logic.

Thus, if one were to declare that “A man is not a rational animal,” or that “2 plus 2 does not equal 4,” one would be maintaining by implication that “A rational animal is not a rational animal,” or that “1 plus 1 plus 1 plus 1 does not equal 1 plus 1 plus 1 plus 1” — both of which are self-contradictory. (The illustration presupposes that “rational animal” is the definition of “man.”) A similar type of self-contradiction would occur if one denied that “Ice is a solid.”
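
Written out as a worked equation, the reduction behind the arithmetic example runs as follows (a minimal sketch, assuming the standard definitions of “2” as 1 + 1 and of “4” as 1 + 1 + 1 + 1):

\[
2 + 2 = (1 + 1) + (1 + 1) = 1 + 1 + 1 + 1 = 4
\]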

Analytic truths represent concrete instances of the Law of Identity; as such, they are also frequently called “tautologies” (which, etymologically, means that the proposition repeats “the same thing”; e.g., “A rational animal is a rational animal,” “The solid form of water is a solid”). Since all of the propositions of logic and mathematics can ultimately be analyzed and validated in this fashion, these two subjects, it is claimed, fall entirely within the “analytic” or “tautological” half of human knowledge.

Synthetic propositions, on the other hand — illustrated by the second proposition in each of the above pairs, and by most of the statements of daily life and of the sciences — are said to be entirely different on all these counts. A “synthetic” proposition is defined as one which cannot be validated merely by an analysis of the meanings or definitions of its constituent concepts. For instance, conceptual or definitional analysis alone, it is claimed, could not tell one whether ice floats on water, or what volume of liquid results when various quantities of water and ethyl alcohol are mixed.

In this type of case, said Kant, the predicate of the proposition (e.g. “floats on water”) states something about the subject (“ice”) which is not already contained in the meaning of the subject-concept. (The proposition represents a synthesis of the subject with a new predicate, hence the name.) Such truths cannot be validated merely by correctly applying the laws of logic; they do not represent concrete instances of the Law of Identity. To deny such truths is to maintain a falsehood, but not a self-contradiction. Thus, it is false to assert that “A man has three eyes,” or that “Ice sinks in water” — but, it is said, these assertions are not self-contradictory. It is the facts of the case, not the laws of logic, which condemn such statements. Accordingly, synthetic truths are held to be “factual,” as opposed to “logical” or “tautological” in character.

(b) Analytic truths are necessary; no matter what region of space or what period of time one considers, such propositions must hold true. Indeed, they are said to be true not only throughout the universe which actually exists, but in “all possible worlds” — to use Leibniz’s famous phrase. Since its denial is self-contradictory, the opposite of any analytic truth is unimaginable and inconceivable. A visitor from an alien planet might relate many unexpected marvels, but his claims would be rejected out-of-hand if he announced that in his world, ice was a gas, man was a postage stamp, and 2 plus 2 equaled 7.3.

Synthetic truths, however, are declared not to be necessary; they are called “contingent.” This means: As a matter of fact, in the actual world that men now observe, such propositions happen to be true — but they do not have to be true. They are not true in “all possible worlds.” Since its denial is not self-contradictory, the opposite of any synthetic truth is at least imaginable or conceivable. It is imaginable or conceivable that men should have an extra eye (or a baker’s dozen of such eyes) in the back of their heads, or that ice should sink in water like a stone, etc. These things do not occur in our experience, but, it is claimed, there is no logical necessity about this. The facts stated by synthetic truths are “brute” facts, which no amount of logic can make fully intelligible.

Can one conclusively prove a synthetic proposition? Can one ever be logically certain of its truth? The answer given is: “No. As a matter of logic, no synthetic proposition ‘has to be’ true; the opposite of any is conceivable.” (The most uncompromising advocates of the analytic-synthetic dichotomy continue: “You cannot even be certain of the direct evidence of your senses — for instance, that you now see a patch of red before you. In classifying what you see as ‘red,’ you are implicitly declaring that it is similar in color to certain of your past experiences — and how do you know that you have remembered these latter correctly? That man’s memory is reliable, is not a tautology; the opposite is conceivable.”) Thus, the most one can ever claim for synthetic, contingent truths is some measure of probability; they are more-or-less-likely hypotheses.

(c) Since analytic propositions are “logically” true, they can, it is claimed, be validated independently of experience; they are “non-empirical” or “a priori” (today, these terms mean: “independent of experience”). Modern philosophers grant that some experience is required to enable a man to form concepts; their point is that, once the appropriate concepts have been formed (e.g., “ice,” “solid,” “water,” etc.), no further experience is required to validate their combination into an analytically true proposition (e.g., “Ice is solid water”). The proposition follows simply from an analysis of definitions.

Synthetic truths, on the other hand, are said to be dependent upon experience for their validation; they are “empirical” or “a posteriori.” Since they are “factual,” one can discover their truth initially only by observing the appropriate facts directly or indirectly; and since they are “contingent,” one can find out whether yesterday’s synthetic truths are still holding today, only by scrutinizing the latest empirical data.

(d) Now we reach the climax: the characteristically twentieth-century explanation of the foregoing differences. It is: Analytic propositions provide no information about reality, they do not describe facts, they are “non-ontological” (i.e., do not pertain to reality). Analytic truths, it is held, are created and sustained by men’s arbitrary decision to use words (or concepts) in a certain fashion, they merely record the implications of linguistic (or conceptual) conventions. This, it is claimed, is what accounts for the characteristics of analytic truths. They are non-empirical — because they say nothing about the world of experience. No fact can ever cast doubt upon them, they are immune from future correction — because they are immune from reality. They are necessary — because men make them so.

“The propositions of logic,” said Wittgenstein in the Tractatus, “all say the same thing: that is, nothing.” “The principles of logic and mathematics,” said A. J. Ayer in Language, Truth and Logic, “are true universally simply because we never allow them to be anything else.”

Synthetic propositions, on the other hand, are factual — and for this, man pays a price. The price is that they are contingent, uncertain and unprovable.

The theory of the analytic-synthetic dichotomy presents men with the following choice: If your statement is proved, it says nothing about that which exists; if it is about existents, it cannot be proved. If it is demonstrated by logical argument, it represents a subjective convention; if it asserts a fact, logic cannot establish it. If you validate it by an appeal to the meaning of your concepts, then it is cut off from reality; if you validate it by an appeal to your percepts, then you cannot be certain of it.

Objectivism rejects the theory of the analytic-synthetic dichotomy as false — in principle, in root, and in every one of its variants.

Now, let us analyze and answer this theory point by point.

“Analytic” and “Synthetic” Truths

An analytic proposition is defined as one which can be validated merely by an analysis of the meaning of its constituent concepts. The critical question is: What is included in “the meaning of a concept”? Does a concept mean the existents which it subsumes, including all their characteristics? Or does it mean only certain aspects of these existents, designating some of their characteristics but excluding others?

The latter viewpoint is fundamental to every version of the analytic-synthetic dichotomy. The advocates of this dichotomy divide the characteristics of the existents subsumed under a concept into two groups: those which are included in the meaning of the concept, and those — the great majority — which, they claim, are excluded from its meaning. The dichotomy among propositions follows directly. If a proposition links the “included” characteristics with the concept it can be validated merely by an “analysis” of the concept; if it links the “excluded” characteristics with the concept, it represents an act of “synthesis.”

For example: it is commonly held that, out of the vast number of man’s characteristics (anatomical, physiological, psychological, etc.), two — “rationality” and “animality” — constitute the entire meaning of the concept “man.” All the rest, it is held, are outside the concept’s meaning. On this view, it is “analytic” to state that “A man is a rational animal” (the predicate is “included” in the subject-concept), but “synthetic” to state that “A man has only two eyes” (the predicate is “excluded”).

The primary historical source of the theory that a concept includes some of an entity’s characteristics but excludes others, is the Platonic realist theory of universals. Platonism holds that concepts designate non-material essences (universals) subsisting in a supernatural dimension. Our world, Plato claimed, is only the reflection of these essences, in a material form. On this view, a physical entity possesses two very different types of characteristics: those which reflect its supernatural essence, and those which arise from the fact that, in this world, the essence is manifest in material form. The first are “essential” to the entity and constitute its real nature; the second are matter-generated “accidents.” Since concepts are said to designate essences, the concept of an entity includes its “essential” characteristics, but excludes its “accidents.”

How does one differentiate “accidents” from “essential” characteristics in a particular case? The Platonists’ ultimate answer is: By an act of “intuition.”

(A more plausible and naturalistic variant of the essence-accident dichotomy is endorsed by Aristotelians; on this point, their theory of concepts reflects a strong Platonic influence.)

In the modern era, Platonic realism lost favor among philosophers; nominalism progressively became the dominant theory of concepts. The nominalists reject supernaturalism as unscientific, and the appeal to “intuition” as a thinly veiled subjectivism. They do not, however, reject the crucial consequence of Plato’s theory: the division of an entity’s characteristics into two groups, one of which is excluded from the concept designating the entity.

Denying that concepts have an objective basis in the facts of reality, nominalists declare that the source of concepts is a subjective human decision: men arbitrarily select certain characteristics to serve as the basis (the “essentials”) for a classification; thereafter, they agree to apply the same term to any concretes that happen to exhibit these “essentials,” no matter how diverse these concretes are in other respects. On this view, the concept (the term) means only those characteristics initially decreed to be “essential.” The other characteristics of the subsumed concretes bear no necessary connection to the “essential” characteristics, and are excluded from the concept’s meaning.

Observe that, while condemning Plato’s mystic view of a concept’s meaning, the nominalists embrace the same view in a skeptic version. Condemning the essence-accident dichotomy as implicitly arbitrary, they institute an explicitly arbitrary equivalent. Condemning Plato’s “intuitive” selection of essences as a disguised subjectivism, they spurn the disguise and adopt subjectivism as their official theory — as though a concealed vice were heinous, but a brazenly flaunted one, rational. Condemning Plato’s supernaturally determined essences, they declare that essences are socially determined, thus transferring to the province of human whim what had once been the prerogative of Plato’s divine realm. The nominalists’ “advance” over Plato consisted of secularizing his theory. To secularize an error is still to commit it.

Its form, however, changes. Nominalists do not say that a concept designates only an entity’s “essence,” excluding its “accidents.” Their secularized version is: A concept is only a shorthand tag for the characteristics stated in its definition; a concept and its definition are interchangeable; a concept means only its definition.

It is the Platonic-nominalist approach to concept-formation, expressed in such views as these, that gives rise to the theory of the analytic-synthetic dichotomy. Yet its advocates commonly advance the dichotomy as a self-contained primary, independent of any particular theory of concepts. Indeed, they usually insist that the issue of concept-formation — since it is “empirical,” not “logical” — is outside the province of philosophy. (!) (Thus, they use the dichotomy to discredit in advance any inquiry into the issues on which the dichotomy itself depends.)

In spite of this, however, they continue to advocate “conceptual analysis,” and to distinguish which truths can — or cannot — be validated by its practice. One is expected to analyze concepts, without a knowledge of their source and nature — to determine their meaning, while ignorant of their relationship to concretes. How? The answer implicit in contemporary philosophical practice is: “Since people have already given concepts their meanings, we need only study common usage.” In other words, paraphrasing Galt: “The concepts are here. How did they get here? Somehow.” (Atlas Shrugged)

Since concepts are complex products of man’s consciousness, any theory or approach which implies that they are irreducible primaries is invalidated by that fact alone. Without a theory of concepts as a foundation, one cannot, in reason, adopt any theory about the nature or kinds of propositions; propositions are only combinations of concepts.

The Objectivist theory of concepts undercuts the theory of the analytic-synthetic dichotomy at its root.

According to Objectivism, concepts “represent classifications of observed existents according to their relationships to other observed existents.” (Ayn Rand, Introduction to Objectivist Epistemology; all further quotations in this section, unless otherwise identified, are from this work.) To form a concept, one mentally isolates a group of concretes (of distinct perceptual units), on the basis of observed similarities which distinguish them from all other known concretes (similarity is “the relationship between two or more existents which possess the same characteristic(s), but in different measure or degree”); then, by a process of omitting the particular measurements of these concretes, one integrates them into a single new mental unit: the concept, which subsumes all concretes of this kind (a potentially unlimited number). The integration is completed and retained by the selection of a perceptual symbol (a word) to designate it. “A concept is a mental integration of two or more units possessing the same distinguishing characteristic(s), with their particular measurements omitted.”

By isolating and integrating perceived concretes, by reducing the number of mental units with which he has to deal, man is able to break up and organize his perceptual field, to engage in a specialized study, and to retain an unlimited amount of information pertaining to an unlimited number of concretes. Conceptualization is a method of acquiring and retaining knowledge of that which exists, on a scale inaccessible to the perceptual level of consciousness.

Since a word is a symbol for a concept, it has no meaning apart from the content of the concept it symbolizes. And since a concept is an integration of units, it has no content or meaning apart from its units. The meaning of a concept consists of the units — the existents — which it integrates, including all the characteristics of these units.

Observe that concepts mean existents, not arbitrarily selected portions of existents. There is no basis whatever — neither metaphysical nor epistemological, neither in the nature of reality nor of a conceptual consciousness — for a division of the characteristics of a concept’s units into two groups, one of which is excluded from the concept’s meaning.

Metaphysically, an entity is: all of the things which it is. Each of its characteristics has the same metaphysical status: each constitutes a part of the entity’s identity.

Epistemologically, all the characteristics of the entities subsumed under a concept are discovered by the same basic method: by observation of these entities. The initial similarities, on the basis of which certain concretes were isolated and conceptually integrated, were grasped by a process of observation; all subsequently discovered characteristics of these concretes are discovered by the same method (no matter how complex the inductive procedures involved may become).

The fact that certain characteristics are, at a given time, unknown to man, does not indicate that these characteristics are excluded from the entity — or from the concept. A is A; existents are what they are, independent of the state of human knowledge; and a concept means the existents which it integrates. Thus, a concept subsumes and includes all the characteristics of its referents, known and not-yet-known.

(This does not mean that man is omniscient, or that he can capriciously ascribe any characteristics he chooses to the referents of his concepts. In order to discover that an entity possesses a certain characteristic, one must engage in a process of scientific study, observation and validation. Only then does one know that that characteristic is true of the entity and, therefore, is subsumed under the concept.)

“It is crucially important to grasp the fact that a concept is an ‘open-end’ classification which includes the yet-to-be-discovered characteristics of a given group of existents. All of man’s knowledge rests on that fact.

“The pattern is as follows: When a child grasps the concept ‘man,’ the knowledge represented by that concept in his mind consists of perceptual data, such as man’s visual appearance, the sound of his voice, etc. When the child learns to differentiate between living entities and inanimate matter, he ascribes a new characteristic, ‘living,’ to the entity he designates as ‘man.’ When the child learns to differentiate among various types of consciousness, he includes a new characteristic in his concept of man, ‘rational’ — and so on. The implicit principle guiding this process, is: ‘I know that there exists such an entity as man; I know many of his characteristics, but he has many others which I do not know and must discover.’ The same principle directs the study of every other kind of perceptually isolated and conceptualized existents.

“The same principle directs the accumulation and transmission of mankind’s knowledge. From a savage’s knowledge of man . . . [to the present level], the concept ‘man’ has not changed: it refers to the same kind of entities. What has changed and grown is the knowledge of these entities.”

What, then, is the meaning of the concept “man”? “Man” means a certain type of entity, a rational animal, including all the characteristics of this entity (anatomical, physiological, psychological, etc., as well as the relations of these characteristics to those of other entities) — all the characteristics already known, and all those ever to be discovered. Whatever is true of the entity, is meant by the concept.

It follows that there are no grounds on which to distinguish “analytic” from “synthetic” propositions. Whether one states that “A man is a rational animal,” or that “A man has only two eyes” — in both cases, the predicated characteristics are true of man and are, therefore, included in the concept “man.” The meaning of the first statement is: “A certain type of entity, including all its characteristics (among which are rationality and animality) is: a rational animal.” The meaning of the second is: “A certain type of entity, including all its characteristics (among which is the possession of only two eyes) has: only two eyes.” Each of these statements is an instance of the Law of Identity; each is a “tautology”; to deny either is to contradict the meaning of the concept “man,” and thus to endorse a self-contradiction.

A similar type of analysis is applicable to every true statement. Every truth about a given existent(s) reduces, in basic pattern, to: “X is: one or more of the things which it is.” The predicate in such a case states some characteristic(s) of the subject; but since it is a characteristic of the subject, the concept(s) designating the subject in fact includes the predicate from the outset. If one wishes to use the term “tautology” in this context, then all truths are “tautological.” (And, by the same reasoning, all falsehoods are self-contradictions.)

When making a statement about an existent, one has, ultimately, only two alternatives: “X (which means X, the existent, including all its characteristics) is what it is” — or: “X is not what it is.” The choice between truth and falsehood is the choice between “tautology” (in the sense explained) and self-contradiction.

In the realm of propositions, there is only one basic epistemological distinction: truth vs. falsehood, and only one fundamental issue: By what method is truth discovered and validated? To plant a dichotomy at the base of human knowledge — to claim that there are opposite methods of validation and opposite types of truth — is a procedure without grounds for justification.

In one sense, no truths are “analytic.” No proposition can be validated merely by “conceptual analysis”; the content of the concept — i.e., the characteristics of the existents it integrates — must be discovered and validated by observation, before any “analysis” is possible. In another sense, all truths are “analytic.” When some characteristic of an entity has been discovered, the proposition ascribing it to the entity will be seen to be “logically true” (its opposite would contradict the meaning of the concept designating the entity). In either case, the analytic-logical-tautological vs. synthetic-factual dichotomy collapses.

To justify their view that some of an entity’s characteristics are excluded from the concept designating it, both Platonists and nominalists appeal to the distinction between the “essential” and the “non-essential” characteristics of an entity. For the Platonists, this distinction represents a metaphysical division, intrinsic to the entity, independent of man and of man’s knowledge. For the nominalists, it represents a subjective human decree, independent of the facts of reality. For both schools, whatever their terminological or other differences, a concept means only the essential (or defining) characteristic of its units.

Neither school provides an objective basis for the distinction between an entity’s “essential” and “non-essential” characteristics. (Supernaturalism — in its avowed or secularized form — is not an objective basis for anything.) Neither school explains why such a distinction is objectively required in the process of conceptualization.

This explanation is provided by Objectivism, and exposes the basic error in the Platonic-nominalist position.

When a man reaches a certain level of conceptual complexity, he needs to discover a method of organizing and interrelating his concepts; he needs a method that will enable him to keep each of his concepts clearly distinguished from all the others, each connected to a specific group of existents clearly distinguished from the other existents he knows. (In the early stages of conceptual development, when a child’s concepts are comparatively few in number and designate directly perceivable concretes, “ostensive definitions” are sufficient for this purpose.) The method consists of defining each concept, by specifying the characteristic(s) of its units upon which the greatest number of their other known characteristics depends, and which distinguishes the units from all other known existents. The characteristic(s) which fulfills this requirement is designated the “essential” characteristic, in that context of knowledge.

Essential characteristics are determined contextually. The characteristic(s) which most fundamentally distinguishes a certain type of entity from all other existents known at the time, may not do so within a wider field of knowledge, when more existents become known and/or more of the entity’s characteristics are discovered. The characteristic(s) designated as “essential” — and the definition which expresses it — may alter as one’s cognitive context expands. Thus, essences are not intrinsic to entities, in the Platonic (or Aristotelian) manner; they are epistemological, not metaphysical. A definition in terms of essential characteristics “is a device of man’s method of cognition — a means of classifying, condensing and integrating an ever-growing body of knowledge.”

Nor is the designation of essential characteristics a matter of arbitrary choice or subjective decree. A contextual definition can be formulated only after one has fully considered all the known facts pertaining to the units in question: their similarities, their differences from other existents, the causal relationships among their characteristics, etc. This knowledge determines which characteristic(s) is objectively essential — and, therefore, which definition is objectively correct — in a given cognitive context. Although the definition explicitly mentions only the essential characteristic(s), it implies and condenses all of this knowledge.

On the objective, contextual view of essences, a concept does not mean only the essential or defining characteristics of its units. To designate a certain characteristic as “essential” or “defining” is to select, from the total content of the concept, the characteristic that best condenses and differentiates that content in a specific cognitive context. Such a selection presupposes the relationship between the concept and its units: it presupposes that the concept is an integration of units, and that its content consists of its units, including all their characteristics. It is only because of this fact that the same concept can receive varying definitions in varying cognitive contexts.

When “rational animal” is selected as the definition of “man,” this does not mean that the concept “man” becomes a shorthand tag for “anything whatever that has rationality and animality.” It does not mean that the concept “man” is interchangeable with the phrase “rational animal,” and that all of man’s other characteristics are excluded from the concept. It means: A certain type of entity, including all its characteristics, is, in the present context of knowledge, most fundamentally distinguished from all other entities by the fact that it is a rational animal. All the presently available knowledge of man’s other characteristics is required to validate this definition, and is implied by it. All these other characteristics remain part of the content of the concept “man.”

The nominalist view that a concept is merely a shorthand tag for its definition, represents a profound failure to grasp the function of a definition in the process of concept formation. The penalty for this failure is that the process of definition, in the hands of the nominalists, achieves the exact opposite of its actual purpose. The purpose of a definition is to keep a concept distinct from all others, to keep it connected to a specific group of existents. On the nominalist view, it is precisely this connection that is severed: as soon as a concept is defined, it ceases to designate existents, and designates instead only the defining characteristic.

And further: On a rational view of definitions, a definition organizes and condenses — and thus helps one to retain — a wealth of knowledge about the characteristics of a concept’s units. On the nominalist view, it is precisely this knowledge that is discarded when one defines a concept: as soon as a defining characteristic is chosen, all the other characteristics of the units are banished from the concept, which shrivels to mean merely the definition. For instance, as long as a child’s concept of “man” is retained ostensively, the child knows that man has a head, two eyes, two arms, etc.; on the nominalist view, as soon as the child defines “man,” he discards all this knowledge; thereafter, “man” means to him only: “a thing with rationality and animality.”

On the nominalist view, the process of defining a concept is a process of cutting the concept off from its referents, and of systematically evading what one knows about their characteristics. Definition, the very tool which is designed to promote conceptual integration, becomes an agent of its destruction, a means of disintegration.

The advocates of the view that a concept means its definition, cannot escape the knowledge that people actually use concepts to designate existents. (When a woman says: “I married a wonderful man,” it is clear to most philosophers that she does not mean: “I married a wonderful combination of rationality and animality.”) Having severed the connection between a concept and its referents, such philosophers sense that somehow this connection nevertheless exists and is important. To account for it, they appeal to a theory which goes back many centuries and is now commonly regarded as uncontroversial: the theory that a concept has two kinds or dimensions of meaning. Traditionally, these are referred to as a concept’s “extension” (or “denotation”) and its “intension” (or “connotation”).

By the “extension” of a concept, the theory’s advocates mean the concretes subsumed under that concept. By the “intension” of a concept, they mean those characteristics of the concretes which are stated in the concept’s definition. (Today, this is commonly called the “conventional” intension; the distinction among various types of intension, however, merely compounds the errors of the theory, and is irrelevant in this context.) Thus in the extensional sense, “man” means Socrates, Plato, Aristotle, Tom, Dick, Harry, etc. In the intensional sense, “man” means “rational animal.”

A standard logic text summarizes the theory as follows: “The intension of a term, as we have noted, is what is usually called its definition. The extension, on the other hand, simply refers us to the set of objects to which the definition applies. . . . Extension and intension are thus intimately related, but they refer to objects in different ways — extension to a listing of the individuals who fall within its quantitative scope, intension to the qualities or characteristics of the individuals.” (Lionel Ruby, Logic: An Introduction.)

This theory introduces another artificial split: between an existent and its characteristics. In the sense in which a concept means its referents (its extensional meaning), it does not mean or refer to their characteristics (its intensional meaning), and vice versa. One’s choice, in effect, is: either to mean existents, apart from their characteristics — or (certain) characteristics, apart from the existents which possess them.

In fact, neither of these alleged types of meaning is metaphysically or epistemologically possible.

A concept cannot mean existents, apart from their characteristics. A thing is — what it is; its characteristics constitute its identity. An existent apart from its characteristics would be an existent apart from its identity, which means: a nothing, a non-existent. To be conscious of an existent is to be conscious of (some of) its characteristics. This is true on all levels of consciousness, but it is particularly obvious on the conceptual level. When one conceptualizes a group of existents, one isolates them mentally from others, on the basis of certain of their characteristics. A concept cannot integrate — or mean — a miscellaneous grab bag of objects; it can only integrate, designate, refer to and mean: existents of a certain kind, existents possessing certain characteristics.

Nor can the concept of an existent mean its characteristics (some or all), apart from the existent. It is not a disembodied, Platonic universal. Just as a concept cannot mean existents apart from their identity, so it cannot mean identity apart from that which exists. Existence is Identity (Atlas Shrugged).

The theory that a concept means its definition, is not improved when it is combined with the view that, in another sense, a concept means its “extension.” Two errors do not make a truth. They merely produce greater chaos and confusion. The truth is that a concept means the existents it integrates, including all their characteristics. It is this view of a concept’s meaning that keeps man’s concepts anchored to reality. On this view, the dichotomy between “analytic” and “synthetic” propositions cannot arise.

Necessity and Contingency

The theory of the analytic-synthetic dichotomy has its roots in two types of error: one epistemological, the other metaphysical. The epistemological error, as I have discussed, is an incorrect view of the nature of concepts. The metaphysical error is: the dichotomy between necessary and contingent facts.

This theory goes back to Greek philosophy, and was endorsed in some form by virtually all philosophical traditions prior to Kant. In the form in which it is here relevant, the theory holds that some facts are inherent in the nature of reality; they must exist; they are “necessary.” Other facts, however, happen to exist in the world that men now observe, but they did not have to exist; they could have been otherwise; they are “contingent.” For instance, that water is wet would be a “necessary” fact; that water turns to ice at a given temperature, would be “contingent.”

Given this dichotomy, the question arises: How does one know in a particular case, that a certain fact is necessary? Observation, it was commonly said, is insufficient for this purpose. “Experience,” wrote Kant in the Critique of Pure Reason, “tells us, indeed, what is, but not that it must necessarily be so, and not otherwise.” To establish that something is a fact, one employs observation and the appropriate inductive procedures; but, it was claimed, to establish that something is a fact is not yet to show that the fact in question is necessary. Some warrant or guarantee, over and above the fact’s existence, is required if the fact is to be necessary; and some insight, over and above that yielded by observation and induction, is required to grasp this guarantee.

In the pre-Kantian era, it was common to appeal to some form of “intellectual intuition” for this purpose. In some cases, it was said, one could just “see” that a certain fact was necessary. How one could see this remained a mystery. It appeared that human beings had a strange, inexplicable capacity to grasp by unspecified means that certain facts not only were, but had to be. In other cases, no such intuition operated, and the facts in question were deemed contingent.

In the post-Kantian era, appeals to “intellectual intuition” lost favor among philosophers, but the necessary-contingent dichotomy went on. Perpetuated in various forms in the nineteenth century, it was reinterpreted in the twentieth as follows: since facts are learned only by experience, and experience does not reveal necessity, the concept of “necessary facts” must be abandoned. Facts, it is now held, are one and all contingent — and the propositions describing them are “contingent truths.” As for necessary truths, they are merely the products of man’s linguistic or conceptual conventions. They do not refer to facts, they are empty, “analytic,” “tautological.” In this manner, the necessary-contingent dichotomy is used to support the alleged distinction between analytic and synthetic propositions. Today, it is a commonplace for philosophers to remark that “factual” statements are “synthetic” and “contingent,” whereas “necessary” statements are “non-factual” and “analytic.”

(Contemporary philosophers prefer to talk about propositions or statements, rather than about facts; they rarely say that facts are contingent, attributing contingency instead to statements about facts. There is nothing to justify this mode of speech, and I shall not adhere to it in discussing their views.)

Observe that both the traditional pre-Kantians and the contemporary conventionalists are in essential agreement: both endorse the necessary-contingent dichotomy, and both hold that necessary truths cannot be validated by experience. The difference is only this: for the traditional philosophers, necessity is a metaphysical phenomenon, grasped by an act of intuition; for the conventionalists, it is a product of man’s subjective choices. The relationship between the two viewpoints is similar to the relationship between Platonists and nominalists on the issue of essences. In both cases, the moderns adopt the fundamentals of the traditionalist position; their “contribution” is merely to interpret that position in an avowedly subjectivist manner.

In the present issue, the basic error of both schools is the view that facts, some or all, are contingent. As far as metaphysical reality is concerned (omitting human actions from consideration, for the moment), there are no “facts which happen to be but could have been otherwise” as against “facts which must be.” There are only: facts which are.

The view that facts are contingent — that the way things actually are is only one among a number of alternative possibilities, that things could have been different metaphysically — represents a failure to grasp the Law of Identity. Since things are what they are, since everything that exists possesses a specific identity, nothing in reality can occur causelessly or by chance. The nature of an entity determines what it can do and, in any given set of circumstances, dictates what it will do. The Law of Causality is entailed by the Law of Identity. Entities follow certain laws of action in consequence of their identity, and have no alternative to doing so.

Metaphysically, all facts are inherent in the identities of the entities that exist; i.e., all facts are “necessary.” In this sense, to be is to be “necessary.” The concept of “necessity,” in a metaphysical context, is superfluous.

(The problem of epistemology is: how to discover facts, how to discover what is. Its task is to formulate the proper methods of induction, the methods of acquiring and validating scientific knowledge. There is no problem of grasping that a fact is necessary, after one has grasped that it is a fact.)

For many centuries, the theory of “contingent facts” was associated with a supernaturalistic metaphysics; such facts, it was said, are the products of a divine creator who could have created them differently — and who can change them at will. This view represents the metaphysics of miracles — the notion that an entity’s actions are unrelated to its nature, that anything is possible to an entity regardless of its identity. On this view, an entity acts as it does, not because of its nature, but because of an omnipotent God’s decree.

Contemporary advocates of the theory of “contingent facts” hold, in essence, the same metaphysics. They, too, hold that anything is possible to an entity, that its actions are unrelated to its nature, that the universe which exists is only one of a number of “possible worlds.” They merely omit God, but they retain the consequences of the religious view. Once more, theirs is a secularized mysticism.

The fundamental error in all such doctrines is the failure to grasp that existence is a self-sufficient primary. It is not a product of a supernatural dimension, or of anything else. There is nothing antecedent to existence, nothing apart from it — and no alternative to it. Existence exists — and only existence exists. Its existence and its nature are irreducible and unalterable.

The climax of the “miraculous” view of existence is represented by those existentialists who echo Heidegger, demanding: “Why is there any being at all and not rather nothing?” — i.e., why does existence exist? This is the projection of a zero as an alternative to existence, with the demand that one explain why existence exists and not the zero.

Non-existentialist philosophers typically disdain Heidegger’s alleged question, writing it off as normal existentialist lunacy. They apparently do not realize that, in holding facts to be contingent, they are committing the same error. When they claim that facts could have been otherwise, they are claiming that existence could have been otherwise. They scorn the existentialists for projecting an alternative to the existence of existence, but spend their time projecting alternatives to the identity of existence.

While the existentialists clamor to know why there is something and not nothing, the non-existentialists answer them (by implication): “This is a ridiculous question. Of course, there is something. The real question is: Why is the something what it is, and not something else?”

A major source of confusion, in this issue, is the failure to distinguish metaphysical facts from man-made facts — i.e., facts which are inherent in the identities of that which exists, from facts which depend upon the exercise of human volition. Because man has free will, no human choice — and no phenomenon which is a product of human choice — is metaphysically necessary. In regard to any man-made fact, it is valid to claim that man has chosen thus, but it was not inherent in the nature of existence for him to have done so; he could have chosen otherwise. For instance, the U.S. did not have to consist of 50 states; men could have subdivided the larger ones or consolidated the smaller ones, etc.

Choice, however, is not chance. Volition is not an exception to the Law of Causality; it is a type of causation. Further, metaphysical facts are unalterable by man, and limit the alternatives open to his choice. Man can rearrange the materials that exist in reality, but he cannot violate their identity; he cannot escape the laws of nature. “Nature, to be commanded, must be obeyed.”

Only in regard to the man-made is it valid to claim: “It happens to be, but it could have been otherwise.” Even here, the term “contingent” is highly misleading. Historically, that term has been used to designate a metaphysical category of much wider scope than the realm of human action; and it has always been associated with a metaphysics which, in one form or another, denies the facts of Identity and Causality. The “necessary-contingent” terminology serves only to introduce confusion, and should be abandoned. What is required in this context is the distinction between the “metaphysical” and the “man-made.”

The existence of human volition cannot be used to justify the theory that there is a dichotomy of propositions or of truths. Propositions about metaphysical facts and propositions about man-made facts do not have different characteristics qua propositions. They differ merely in their subject matter, but then so do the propositions of astronomy and immunology. Truths about metaphysical and about man-made facts are learned and validated by the same process: by observation; and, qua truths, both are equally necessary. Some facts are not necessary, but all truths are.

Truth is the identification of a fact of reality. Whether the fact in question is metaphysical or man-made, the fact determines the truth: if the fact exists, there is no alternative in regard to what is true. For instance, the fact that the U.S. has 50 states was not metaphysically necessary — but as long as this is men’s choice, the proposition that “The U.S. has 50 states” is necessarily true. A true proposition must describe the facts as they are. In this sense, a “necessary truth” is a redundancy, and a “contingent truth” is a self-contradiction.

Logic and Experience

Throughout its history, philosophy has been torn by the conflict between the rationalists and the empiricists. The former stress the role of logic in man’s acquisition of knowledge, while minimizing the role of experience; the latter claim that experience is the source of man’s knowledge, while minimizing the role of logic. This split between logic and experience is institutionalized in the theory of the analytic-synthetic dichotomy.

Analytic statements, it is said, are independent of experience; they are “logical” propositions. Synthetic statements, on the other hand, are devoid of logical necessity; they are “empirical” propositions.

Any theory that propounds an opposition between the logical and the empirical, represents a failure to grasp the nature of logic and its role in human cognition. Man’s knowledge is not acquired by logic apart from experience or by experience apart from logic, but by the application of logic to experience. All truths are the product of a logical identification of the facts of experience.

Man is born tabula rasa; all his knowledge is based on and derived from the evidence of his senses. To reach the distinctively human level of cognition, man must conceptualize his perceptual data — and conceptualization is a process which is neither automatic nor infallible. Man needs to discover a method to guide this process, if it is to yield conclusions which correspond to the facts of reality — i.e., which represent knowledge. The principle at the base of the proper method is the fundamental principle of metaphysics: the Law of Identity. In reality, contradictions cannot exist; in a cognitive process, a contradiction is the proof of an error. Hence the method man must follow: to identify the facts he observes, in a non-contradictory manner. This method is logic — “the art of non-contradictory identification.” (Atlas Shrugged.) Logic must be employed at every step of a man’s conceptual development, from the formation of his first concepts to the discovery of the most complex scientific laws and theories. Only when a conclusion is based on a noncontradictory identification and integration of all the evidence available at a given time, can it qualify as knowledge.

The failure to recognize that logic is man’s method of cognition, has produced a brood of artificial splits and dichotomies which represent restatements of the analytic-synthetic dichotomy from various aspects. Three in particular are prevalent today: logical truth vs. factual truth; the logically possible vs. the empirically possible; and the a priori vs. the a posteriori.

The logical-factual dichotomy opposes truths which are validated “merely” by the use of logic (the analytic ones), to truths which describe the facts of experience (the synthetic ones). Implicit in this dichotomy is the view that logic is a subjective game, a method of manipulating arbitrary symbols, not a method of acquiring knowledge.

It is the use of logic that enables man to determine what is and what is not a fact. To introduce an opposition between the “logical” and the “factual” is to create a split between consciousness and existence, between truths in accordance with man’s method of cognition and truths in accordance with the facts of reality. The result of such a dichotomy is that logic is divorced from reality (“Logical truths are empty and conventional”) — and reality becomes unknowable (“Factual truths are contingent and uncertain”). This amounts to the claim that man has no method of cognition, i.e., no way of acquiring knowledge.

The acquisition of knowledge, as Ayn Rand has observed, involves two fundamental questions: “What do I know?” and “How do I know it?” The advocates of the logical-factual dichotomy tell man, in effect: “You can’t know the ‘what’ — because there is no ‘how.’” (These same philosophers claim to know the truth of their position by means of an unanswerable logical argument.)

To grasp the nature of their epistemological procedure, consider a mathematician who would claim that there is a dichotomy between two types of truth in the process of adding columns of figures: truths which state the actual sum of a given column versus truths which are reached by adherence to the laws of addition — the “summational truths” vs. the “additive truths.” The former represent the actual sums — which, however, are unfortunately unprovable and unknowable, since they cannot be arrived at by the methods of addition; the latter, which are perfectly certain and necessary, are unfortunately a subjective fantasy-creation, with no relationship to actual sums in the actual world. (At this point, a pragmatist mathematician comes along and provides his “solution”: “Adding,” he tells us, “may be subjective, but it works.” Why does it? How does he know it does? What about tomorrow? “Those questions,” he replies, “aren’t fruitful.”)

If mathematicians were to accept this doctrine, the destruction of mathematics would follow. When philosophers accept such a doctrine, the same consequences may be expected — with only this difference: the province of philosophy embraces the total of human knowledge.

Another restatement of the analytic-synthetic dichotomy is the view that opposes the “logically” possible and the “empirically” possible.

If the proposition that a given phenomenon exists is not self-contradictory, then that phenomenon, it is claimed, is “logically” possible; if the proposition is self-contradictory, then the phenomenon is “logically” impossible. Certain phenomena, however, although logically possible, are contrary to the “contingent” laws of nature that men discover by experience; these phenomena are “empirically” — but not “logically” — impossible. Thus, a married bachelor is “logically” impossible; but a bachelor who can fly to the moon by means of flapping his arms is merely “empirically” impossible (i.e., the proposition that such a bachelor exists is not self-contradictory, but such a bachelor is not in accordance with the laws that happen to govern the universe).

The metaphysical basis of this dichotomy is the premise that a violation of the laws of nature would not involve a contradiction. But as we have seen, the laws of nature are inherent in the identities of the entities that exist. A violation of the laws of nature would require that an entity act in contradiction to its identity; i.e., it would require the existence of a contradiction. To project such a violation is to endorse the “miraculous” view of the universe, as already discussed.

The epistemological basis of this dichotomy is the view that a concept consists only of its definition. According to the dichotomy, it is logically impermissible to contradict the definition of a concept; what one asserts by this means is “logically” impossible. But to contradict any of the non-defining characteristics of a concept’s referents, is regarded as logically permissible; what one asserts in such a case is merely “empirically” impossible.

Thus, a “married bachelor” contradicts the definition of “bachelor” and hence is regarded as “logically” impossible. But a “bachelor who can fly to the moon by means of flapping his arms” is regarded as “logically” possible, because the definition of “bachelor” (“an unmarried man”) does not specify his means of locomotion. What is ignored here is the fact that the concept “bachelor” is a subcategory of the concept “man,” that as such it includes all the characteristics of the entity “man,” and that these exclude the ability to fly by flapping his arms. Only by reducing a concept to its definition and by evading all the other characteristics of its referents can one claim that such projections do not involve a self-contradiction.

Those who attempt to distinguish the “logically” possible and the “empirically” possible commonly maintain that the “logically” impossible is unimaginable or inconceivable, whereas the merely “empirically” impossible is at least imaginable or conceivable, and that this difference supports the distinction. For instance, “ice which is not solid” (a “logical” impossibility) is inconceivable; but “ice which sinks in water” (a merely “empirical” impossibility) is at least conceivable, they claim, even though it does not exist; one need merely visualize a block of ice floating on water, and suddenly plummeting straight to the bottom.

This argument confuses Walt Disney with metaphysics. That a man can project an image or draw an animated cartoon at variance with the facts of reality, does not alter the facts; it does not alter the nature or the potentialities of the entities which exist. An image of ice sinking in water does not alter the nature of ice; it does not constitute evidence that it is possible for ice to sink in water. It is evidence only of man’s capacity to engage in fantasy. Fantasy is not a form of cognition.

Further: the fact that man possesses the capacity to fantasize does not mean that the opposite of demonstrated truths is “imaginable” or “conceivable.” In a serious, epistemological sense of the word, a man cannot conceive the opposite of a proposition he knows to be true (as apart from propositions dealing with man-made facts). If a proposition asserting a metaphysical fact has been demonstrated to be true, this means that that fact has been demonstrated to be inherent in the identities of the entities in question, and that any alternative to it would require the existence of a contradiction. Only ignorance or evasion can enable a man to attempt to project such an alternative. If a man does not know that a certain fact has been demonstrated, he will not know that its denial involves a contradiction.  If a man does know it, but evades his knowledge and drops his full cognitive context, there is no limit to what he can pretend to conceive. But what one can project by means of ignorance or evasion, is philosophically irrelevant. It does not constitute a basis for instituting two separate categories of possibility.

There is no distinction between the “logically” and the “empirically” possible (or impossible). All truths, as I have said, are the product of a logical identification of the facts of experience. This applies as much to the identification of possibilities as of actualities.

The same considerations invalidate the dichotomy between the a priori and the a posteriori. According to this variant, certain propositions (the analytic ones) are validated independently of experience, simply by an analysis of the definitions of their constituent concepts; these propositions are “a priori.” Others (the synthetic ones) are dependent upon experience for their validation; they are “a posteriori.”

As we have seen, definitions represent condensations of a wealth of observations, i.e., a wealth of “empirical” knowledge; definitions can be arrived at and validated only on the basis of experience. It is senseless, therefore, to contrast propositions which are true “by definition” and propositions which are true “by experience.” If an “empirical” truth is one derived from, and validated by reference to, perceptual observations, then all truths are “empirical.” Since truth is the identification of a fact of reality, a “non-empirical truth” would be an identification of a fact of reality which is validated independently of observation of reality. This would imply a theory of innate ideas, or some equally mystical construct.

Those who claim to distinguish a posteriori and a priori propositions commonly maintain that certain truths (the synthetic, factual ones) are “empirically falsifiable,” whereas others (the analytic, logical ones) are not. In the former case, it is said, one can specify experiences which, if they occurred, would invalidate the proposition; in the latter, one cannot. For instance, the proposition “Cats give birth only to kittens” is “empirically falsifiable” because one can invent experiences that would refute it, such as the spectacle of tiny elephants emerging from a cat’s womb. But the proposition “Cats are animals” is not “empirically falsifiable” because “cat” is defined as a species of animal. In the former case, the proposition remains true only as long as experience continues to bear it out; therefore, it depends on experience, i.e., it is a posteriori. In the latter case, the truth of the proposition is immune to any imaginable change in experience and, therefore, is independent of experience, i.e., is a priori.

Observe the inversion propounded by this argument: a proposition can qualify as a factual empirical truth only if man is able to evade the facts of experience and arbitrarily to invent a set of impossible circumstances that contradict these facts; but a truth whose opposite is beyond man’s power of invention, is regarded as independent of and irrelevant to the nature of reality, i.e., as an arbitrary product of human “convention.”

Such is the unavoidable consequence of the attempt to divorce logic and experience.

As I have said, knowledge cannot be acquired by experience apart from logic, nor by logic apart from experience. Without the use of logic, man has no method of drawing conclusions from his perceptual data; he is confined to range-of-the-moment observations, but any perceptual fantasy that occurs to him qualifies as a future possibility which can invalidate his “empirical” propositions. And without reference to the facts of experience, man has no basis for his “logical” propositions, which become mere arbitrary products of his own invention. Divorced from logic, the arbitrary exercise of the human imagination systematically undercuts the “empirical”; and divorced from the facts of experience, the same imagination arbitrarily creates the “logical.”

I challenge anyone to invent a more thorough way of invalidating all of human knowledge.

Conclusion

The ultimate result of the theory of the analytic-synthetic dichotomy is the following verdict pronounced on human cognition: if the denial of a proposition is inconceivable, if there is no possibility that any fact of reality can contradict it, i.e., if the proposition represents knowledge which is certain, then it does not represent knowledge of reality. In other words: if a proposition cannot be wrong, it cannot be right. A proposition qualifies as factual only when it asserts facts which are still unknown, i.e., only when it represents a hypothesis; should a hypothesis be proved and become a certainty, it ceases to refer to facts and ceases to represent knowledge of reality. If a proposition is conclusively demonstrated — so that to deny it is obviously to endorse a logical contradiction — then in virtue of this fact, the proposition is written off as a product of human convention or arbitrary whim.

This means: a proposition is regarded as arbitrary precisely because it has been logically proved. The fact that a proposition cannot be refuted, refutes it (i.e., removes it from reality). A proposition can retain a connection to facts only insofar as it has not been validated by man’s method of cognition, i.e., by the use of logic. Thus proof is made the disqualifying element of knowledge, and knowledge is made a function of human ignorance.

This theory represents a total epistemological inversion: it penalizes cognitive success for being success. Just as the altruist mentality penalizes the good for being the good, so the analytic-synthetic mentality penalizes knowledge for being knowledge. Just as, according to altruism, a man is entitled only to what he has not earned, so, according to this theory, a man is entitled to claim as knowledge only what he has not proved. Epistemological humility becomes the prerequisite of cognition: “the meek shall inherit the truth.”

The philosopher most responsible for these inversions is Kant. Kant’s system secularized the mysticism of the preceding centuries and thereby gave it a new lease on life in the modern world. In the religious tradition, “necessary” truths were commonly held to be consequences of God’s mode of thought. Kant substituted the “innate structure of the human mind” for God, as the source and creator of “necessary” truths (which thus became independent of the facts of reality).

The philosophers of the twentieth century merely drew the final consequences of the Kantian view. If it is man’s mode of thought (independent of reality) that creates “necessary” truths, they argued, then these are not fixed or absolute; men have a choice in regard to their modes of thought; what the mind giveth, the mind taketh away. Thus, the contemporary conventionalist viewpoint.

We can know only the “phenomenal,” mind-created realm, according to Kant; in regard to reality, knowledge is impossible. We can be certain only within the realm of our own conventions, according to the moderns; in regard to facts, certainty is impossible.

The moderns represent a logical, consistent development from Kant’s premises. They represent Kant plus choice — a voluntaristic Kantianism, a whim-worshipping Kantianism. Kant marked the cards and made reason an agent of distortion. The moderns are playing with the same deck; their contribution is to play it deuces wild, besides.

Now observe what is left of philosophy in consequence of this neo-Kantianism.

Metaphysics has been all but obliterated: its most influential opponents have declared that all metaphysical statements are neither analytic nor synthetic, and therefore are meaningless.

Ethics has been virtually banished from the province of philosophy: some groups have claimed that ethical statements are neither analytic nor synthetic, but are mere “emotive ejaculations” — and other groups have consigned ethics to the province of the man in the street, claiming that philosophers may analyze the language of ethical statements, but are not competent to prescribe ethical norms.

Politics has been discarded by virtually all philosophic schools: insofar as politics deals with values, it has been relegated to the same status as ethics.

Epistemology, the theory of knowledge, the science that defines the rules by which man is to acquire knowledge of facts, has been disintegrated by the notion that facts are the subject matter of “synthetic,” “empirical” propositions and, therefore, are outside the province of philosophy — with the result that the special sciences are now left adrift in a rising tide of irrationalism.

What we are witnessing is the self-liquidation of philosophy.

To regain philosophy’s realm, it is necessary to challenge and reject the fundamental premises which are responsible for today’s debacle. A major step in that direction is the elimination of the death carrier known as the analytic-synthetic dichotomy.

 

The American School: Why Johnny Can’t Think

This lecture was delivered at Boston’s Ford Hall Forum in April 1984, published in the October–December 1984 issues of The Objectivist Forum and anthologized in The Voice of Reason: Essays in Objectivist Thought in 1989.

We are now a few hours from Income Tax Day in George Orwell’s year — an ominous moment, symbolically, when we feel acutely the weight of an ever growing government, and must begin to wonder what will happen next and how long our liberty can last.

The answer depends on the youth of the country and on the institutions that educate them. The best indicator of our government tomorrow is our schools today. Are our youngsters being brought up to be free, independent, thinking men and women? Or are they being turned into helpless, mindless pawns, who will run into the arms of the first dictator that sounds plausible?

One does not have to be an Objectivist to be alarmed about the state of today’s schools. Virtually everybody is in a panic over them — shocked by continuously falling SAT scores; by college entrants unable to write, spell, paragraph, or reason; by a generation of schoolteachers so bad that even teachers-union president Albert Shanker says of them: “For the most part, you are getting illiterate, incompetent people who cannot go into any other field.” 1 Quoted in USA Today, Aug. 12, 1983.

Last November, a new academic achievement test was given to some six hundred sixth-grade students in eight industrialized countries. The American students, chosen to be representative of the nation, finished dead last in mathematics, miles behind the Japanese, and sixth out of eight in science. As to geography, 20 percent of the Americans at one school could not find the U.S. on a world map. The Chicago Tribune reported these findings under the headline: “Study hands world dunce cap to U.S. pupils.” 2 Dec. 12, 1983.

A year ago, the National Commission on Excellence in Education described the United States as “a nation at risk,” pointing to what it called “a rising tide of mediocrity [in our schools] that threatens our very future as a nation and as a people.” 3 Quoted in the New York Times, Apr. 27, 1983. These are extreme words for normally bland government commissioners, but the words are no exaggeration.

To prepare for this evening’s discussion, I did some first-hand research. I spent two weeks in February visiting schools in New York City, both public and private, from kindergarten through teachers college. I deliberately chose schools with good reputations — some of which are the shining models for the rest of the country; and I let the principals guide me to their top teachers. I wanted to see the system not when it was just scraping by, starved for money and full of compromises, but at its best, when it was adequately funded, competently staffed, and proud of its activities. I got an eyeful.

My experience at one school, a famous Progressive institution, will serve to introduce my impression of the whole system. I had said that I was interested in observing how children are taught concepts, and the school obligingly directed me to three classes. The first, for nine- and ten-year-olds, was a group discussion of thirteen steps in seal-hunting, from cutting the hole in the ice at the start to sharing the blubber with others at the end. The teacher gave no indication of the purpose of this topic, but he did indicate that the class would later perform a play on seal-hunting and perhaps even computerize the steps. The next class, for thirteen-year-olds, consisted of a mock Washington hearing on the question of whether there should be an import tax on Japanese cars; students played senators, Japanese lobbyists, Lee Iacocca, and so on, and did it quite well; the teacher sat silently, observing. I never learned the name of this course or of the seal-hunting one, but finally I was to observe a meeting described to me as a class in English. At last, I thought, an academic subject. But no. The book being covered was Robert Kennedy’s Thirteen Days, a memoir of the Cuban missile crisis of 1962; a typical topic for discussion was whether a surgical air strike against Cuba would have been better policy than a blockade.

The school, undoubtedly, would defend these classes as exercises in ethnicity or democracy or relevance, but, whatever the defense, the fact is that all these classes were utterly concrete-bound. Seal-hunting was not used to illustrate the rigors of northern life or the method of analyzing a skill into steps or anything at all. The issue of taxing Japanese cars was not related to a study of free trade vs. protectionism, or of the proper function of government, or of the principles of foreign policy, or of any principles. The same applies to the Cuban discussion. In all cases, a narrow concrete was taught, enacted, discussed, argued over in and of itself, i.e., as a concrete, without connection to any wider issue. This is the essence of the approach that, in various forms, is destroying all of our schools: the anti-conceptual approach.

Let me elaborate for a moment on the crucial philosophic point involved here.

Man’s knowledge begins on the perceptual level, with the use of the five senses. This much we share with the animals. But what makes us human is what our mind does with our sense experiences. What makes us human is the conceptual level, which includes our capacity to abstract, to grasp common denominators, to classify, to organize our perceptual field. The conceptual level is based on the perceptual, but there are profound differences between the two — in other words, between perceiving and thinking. Here are some of the differences; this is not an exhaustive list, merely enough to indicate the contrast.

The perceptual level is concerned only with concretes. For example, a man goes for a casual stroll on the beach — let’s make it a drunken stroll so as to numb the higher faculties and isolate the animal element — and he sees a number of concrete entities: those birds chattering over there, this wave crashing to shore, that boulder rolling downhill. He observes, moves on, sees a bit more, forgets the earlier. On the conceptual level, however, we function very differently; we integrate concretes by means of abstractions, and thereby immensely expand the amount of material we can deal with. The animal or drunk merely looks at a few birds, then forgets them; a functioning man can retain an unlimited number, by integrating them all into the concept “bird,” and can then proceed deliberately to study the nature of birds, their anatomy, habits, and so forth.

The drunk on his walk is aware of a vast multiplicity of things. He lurches past a chaos made of waves, rocks, and countless other entities, and has no ability to make connections among them. On the conceptual level, however, we do not accept such chaos; we turn a multiplicity into a unity by finding the common denominators that run through all the seemingly disconnected concretes; and we thereby make them intelligible. We discover the law of gravity, for example, and grasp that by means of a single principle we can understand the falling boulder, the rising tide, and many other phenomena.

On the perceptual level, no special order is necessary. The drunk can totter from bird to rock to tree in any order he wishes and still see them all. But we cannot do that conceptually; in the realm of thought, a definite progression is required. Since we build knowledge on previous knowledge, we need to know the necessary background, or context, at each stage. For example, we cannot start calculus before we know arithmetic — or argue about tariff protection before we know the nature of government.

Finally, for this brief sketch: on the perceptual level, there is no need of logic, argument, proof; a man sees what he sees, the facts are self-evident, and no further cognitive process is required. But on the conceptual level, we do need proof. We need a method of validating our ideas; we need a guide to let us know what conclusions follow from what data. That guide is logic.

Perception as such, the sheer animal capacity, consists merely in staring at concretes, at a multiplicity of them, in no order, with no context, no proof, no understanding — and all one can know by this means is whatever he is staring at, as long as he is staring. Conception, however — the distinctively human faculty — involves the formation of abstractions that reduce the multiplicity to an intelligible unity. This process requires a definite order, a specific context at each stage, and the methodical use of logic.

Now let us apply the above to the subject of our schools. An education that trains a child’s mind would be one that teaches him to make connections, to generalize, to understand the wider issues and principles involved in any topic. It would achieve this feat by presenting the material to him in a calculated, conceptually proper order, with the necessary context, and with the proof that validates each stage. This would be an education that teaches a child to think.

The complete opposite — the most perverse aberration imaginable — is to take conceptual-level material and present it to the students by the method of perception. This means taking the students through history, literature, science, and the other subjects on the exact model of that casual, unthinking, drunken walk on the beach. The effect is to exile the student to a no-man’s-land of cognition, which is neither perception nor conception. What it is, in fact, is destruction, the destruction of the minds of the students and of their motivation to learn.

This is literally what our schools are doing today. Let me illustrate by indicating how various subjects are taught, in the best schools, by the best teachers. You can then judge for yourself why Johnny can’t think.

I went to an eighth-grade class on Western European history in a highly regarded, non-Progressive school with a university affiliation. The subject that day was: why does human history constantly change? This is an excellent question, which really belongs to the philosophy of history. What factors, the teacher was asking, move history and explain men’s past actions? Here are the answers he listed on the board: competition among classes for land, money, power, or trade routes; disasters and catastrophes (such as wars and plagues); the personality of leaders; innovations, technology, new discoveries (potatoes and coffee were included here); and developments in the rest of the world, which interacts with a given region. At this point, time ran out. But think of what else could qualify as causes in this kind of approach. What about an era’s press or media of communication? Is that a factor in history? What about people’s psychology, including their sexual proclivities? What about their art or their geography? What about the weather?

Do you see the hodgepodge the students are being given? History, they are told, is moved by power struggles and diseases and potatoes and wars and chance personalities. Who can make sense out of such a chaos? Here is a random multiplicity thrown at a youngster without any attempt to conceptualize it — to reduce it to an intelligible unity, to trace the operation of principles. This is perceptual-level history, history as nothing but a torrent of unrelated, disintegrated concretes.

The American Revolution, to take a specific example, was once taught in the schools on the conceptual level. The Revolution’s manifold aspects were identified, then united and explained by a principle: the commitment of the colonists to individual rights and their consequent resolve to throw off the tyrant’s yoke. This was a lesson students could understand and find relevant in today’s world. But now the same event is ascribed to a whole list of alleged causes. The students are given ten (or fifty) causes of the Revolution, including the big landowners’ desire to preserve their estates, the Southern planters’ desire for a cancellation of their English debts, the Bostonians’ opposition to tea taxes, the Western land speculators’ need to expand past the Appalachians, etc. No one can retain such a list longer than is required to pass the exam; it must be memorized, then regurgitated, then happily and thoroughly forgotten. That is all one can do with unrelated concretes.

If the students were taught by avowed Marxists — if they were told that history reflects the clash between the factors of production and the modes of ownership — it would be dead wrong, but it would still be a principle, an integrating generalization, and it would be much less harmful to the students’ ability to think; they might still be open to argument on the subject. But to teach them an unconceptualized hash is to imply that history is a tale told by an idiot, without wider meaning, or relevance to the present. This approach destroys the possibility of the students thinking or caring at all about the field.

I cannot resist adding that the State Education Department of New York has found a way, believe it or not, to make the teaching of history still worse. You might think that, in history at least, the necessary order of presenting the material is self-evident. Since each era grows out of the preceding, the obvious way to teach events is as they happened, i.e., chronologically. But not according to a new proposal. In order “to put greater emphasis on sociological, political, and economic issues,” a New York State proposal recommends that historical material be organized for the students according to six master topics picked out of the blue from the pop ethos: “ecology, human needs, human rights, cultural interaction, the global system of economic interdependence, and the future.” In this approach, an event from a later period can easily be taught (in connection with one master topic) first, long before the developments from an earlier period that actually led to it. As a more traditional professor from Columbia has noted: “The whole thing would be wildly out of chronological order. The [Russian] purge trials of the 1930s would be taught before the revolutions of 1905 and 1917. It is all fragmented and there is no way that this curriculum relates one part of a historical period to another, which is what you want kids to be able to do.” 4 The New York Times, Apr. 18, 1983; the professor is Hazel Hertzberg. But the modern educators don’t seem to care about that. They want “fragments,” i.e., concretes, without context, logic, or any other demands of a conceptual progression.

I do not know what became of this New York proposal. The fact that it was announced to the press and discussed seriously is revealing enough.

Given the way history is now being taught, it is not surprising that huge chunks of it promptly get forgotten by the students or simply are never taken in. The result is many adolescents’ shocking ignorance of the most elementary historical, or current, facts. One man wrote a column recently in the Washington Post recounting his conversations with today’s teenagers. He found high school graduates who did not know anything about World War II, including what happened at Pearl Harbor, or what country the United States was fighting in the Pacific. “Who won?” one college student asked him. At one point, the writer and a girl who was a junior at the University of Southern California were watching television coverage of Poland after martial law had been imposed; the set showed political prisoners being put into a cage. The girl could not understand it.

“‘Why don’t they just leave and come to L.A.?’” she asked.

“I explained that they were not allowed to leave.”

“‘They’re not?’” she said. “‘Why not?’”

“I explained that in totalitarian states citizens usually could not emigrate.”

“‘They can’t?’” she said. “‘Since when? Is that something new?’” 5 Benjamin J. Stein, “The Cheerful Ignorance of the Young in L.A.,” Oct. 3, 1983.

Now let us make a big jump — from history to reading. Let us look at the method of teaching reading that is used by most American schools in some form: the Look-Say method (as against Phonics).

The method of Phonics, the old-fashioned approach, first teaches a child the sound of individual letters; then it teaches him to read words by combining these sounds. Each letter thus represents an abstraction subsuming countless instances. Once a child knows that p sounds “puh,” for instance, that becomes a principle; he grasps that every p he meets sounds the same way. When he has learned a few dozen such abstractions, he has acquired the knowledge necessary to decipher virtually any new word he encounters. Thus the gigantic multiplicity of the English vocabulary is reduced to a handful of symbols. This is the conceptual method of learning to read.

Modern educators object to it. Phonics, they say (among many such charges), is unreal. I quote from one such mentality: “There is little value in pronouncing the letter p in isolation; it is almost impossible to do this — a vowel of some sort almost inevitably follows the pronunciation of any consonant.” 6 Pose Lamb, Linguistics in Proper Perspective (Charles E. Merrill: 1977, 2nd ed.), p. 29. This means: when you pronounce the sound of p — “puh” — you have to utter the vowel sound “uh”; so you haven’t isolated the pure consonant; so Phonics is artificial. But why can’t you isolate in your mind, focusing only on the consonant sound, ignoring the accompanying vowel for purposes of analysis — just as men focus on a red table’s color but ignore its shape in order to reach the concept “red”? Why does this writer rule out selective attention and analysis, which are the very essence of human cognition? Because these involve an act of abstraction; they represent a conceptual process, precisely the process that modern educators oppose.

Their favored method, Look-Say, dispenses with abstractions. Look-Say forces a child to learn the sounds of whole words without knowing the sounds of the individual letters or syllables. This makes every word a new concrete to be grasped only by perceptual means, such as trying to remember its distinctive shape on the page, or some special picture the teacher has associated with it. Which amounts to heaping on the student a vast multiplicity of concretes and saying: stare at these and memorize them. (You may not be surprised to discover that this method was invented, as far as I can tell, by an eighteenth-century German professor who was a follower of Rousseau, the passionate opponent of reason.)

There is a colossal Big Lie involved in the Look-Say propaganda. Its advocates crusade against the overuse of memory; they decry Phonics because, they say, it requires a boring memorization of all the sounds of the alphabet. Their solution is to replace such brief, simple memorization with the task of memorizing the sound of every word in the language. In fact, if one wishes to save children from the drudgery of endless memorization, only the teaching of abstractions will do it — in any field.

No one can learn to read by the Look-Say method. It is too anti-human. Our schools today, therefore, are busy teaching a new skill: guessing. They offer the children some memorized shapes and pictures to start, throw in a little Phonics (thanks to immense parental pressure), count on the parents secretly teaching their children something at home about reading — and then, given this stew of haphazard clues, they concentrate their efforts on teaching the children assorted methods of guessing what a given word might be.

Here is a Look-Say expert describing a child’s proper mental processes when trying to determine the last word of the sentence, “They make belts out of plastic.” The child must not, of course, try to sound out the letters. Here is what should go on in his brain instead:

“Well, it isn’t leather, because that begins with l. My mother has a straw belt, but it isn’t straw either. It looks like a root. I’ll divide it between s and t. There couldn’t be more than two syllables because there are only two vowels. Let’s see — p, l, a, s. One vowel and it’s not at the end of the syllable . . .” This goes on a while longer, and the child finally comes up with: “Oh, sure, plastic! I’m surprised I didn’t think of that right away because so many things are made of plastic.” The expert comments: “Just described is a child who was not about to carry out a letter-by-letter analysis of plastic if it wasn’t necessary, which is exactly right.” 7 Dolores Durkin, Strategies for Identifying Words, p. 83; quoted in Rudolf Flesch, Why Johnny Still Can’t Read (Harper Colophon: 1983), p. 81.

Can you imagine reading War and Peace by this method? You would die of old age before you reached the third chapter.

I must add that the Look-Say educators demand that children — I quote another devotee — “receive praise for a good guess even though it is not completely accurate. For example, if a child reads ‘I like to eat carrots’ as ‘I like to eat cake,’ praise should be given for supplying a word that makes sense and follows at least some of the phonic cues.” 8 Dixie Lee Spiegel, in Reading Teacher, April 1978; quoted in Flesch, op. cit., p. 24.

How would you like to see, at the head of our army, a general with this kind of schooling? He receives a telegram from the president during a crisis ordering him to “reject nuclear option,” proceeds to make a good guess, and reads it as “release nuclear option.” Linguistically, the two are as close as “carrots” and “cake.”

The result of the Look-Say method is a widespread “reading neurosis” among children, a flat inability to read, which never existed in the days of Phonics (and also a bizarre inability to spell). In 1975, for example, 35 percent of fourth-graders, 37 percent of eighth-graders, and 23 percent of twelfth-graders could not read simple printed instructions. The U.S. literacy rate, it has been estimated, is now about equal to that of Burma or Albania, and by all signs is still dropping. Do you see why angry parents are suing school systems for a new crime: educational malpractice?

Now let us look at another aspect of English studies: the teaching of grammar. This subject brings out even more clearly the modern educators’ contempt for concepts.

Grammar is the study of how to combine words — i.e., concepts — into sentences. The basic rules of grammar — such as the need of subject and predicate, or the relation of nouns and verbs — are inherent in the nature of concepts and apply to every language; they define the principles necessary to use concepts intelligibly. Grammar, therefore, is an indispensable subject; it is a science based entirely on facts — and not a very difficult science, either.

Our leading educators, however, see no relation between concepts and facts. The reason they present material from subjects such as history without conceptualizing it, is precisely that they regard concepts as mental constructs without relation to reality. Concepts, they hold, are not a device of cognition, but a mere human convention, a ritual unrelated to knowledge or reality, to be performed according to arbitrary social fiat. It follows that grammar is a set of pointless rules, decreed by society for no objectively defensible reason.

I quote from a book on linguistics written for English teachers by a modern professor: “Because we know that language is arbitrary and changing, a teacher’s attitude toward nonstandard usage should be one of acceptance. . . . One level of language is not ‘better’ than another; this is why the term nonstandard is preferable to substandard in describing such usage as ‘He don’t do it,’ ‘Was you there?’ A person who uses terms such as these will probably be penalized in terms of social and educational advancement in our society, however, and it is for this reason that the teacher helps children work toward, and eventually achieve, standard usage, perhaps as a ‘second’ language.” 9 Lamb, op. cit., p. 19. In short, there is no “correct” or “incorrect” any more, not in any aspect of language; there is only the senseless prejudice of society.

I saw the results of this approach in the classroom. I watched an excellent public-school teacher trying to explain the possessive forms of nouns. She gave a clear statement of the rules, with striking examples and frequent repetition; she was dynamic, she was colorful, she was teaching her heart out. But it was futile. This teacher was not a philosopher of language, and she could not combat the idea, implicit in the textbook and in all the years of the students’ earlier schooling, that grammar is purposeless. The students seemed to be impervious to instruction and incapable of attention, even when the teacher would blow a shrieking police whistle to shock them momentarily into silence. To them, the subject was nothing but senseless rules: the apostrophe goes here in this case, there in that one. Here was a whole science reduced to disintegrated concretes that had to be blindly memorized — just like the ten causes of the American Revolution, or the ten shapes of the last Look-Say session.

You might wonder how one teaches composition — the methods of expressing one’s thoughts clearly and eloquently in writing — given today’s philosophy of grammar and of concepts. I will answer by reading excerpts from a recent manifesto.

“We affirm the students’ right to their own patterns and varieties of language — the dialects of their nurture or whatever dialects in which they find their own identity and style. . . . The claim that any one dialect is unacceptable amounts to an attempt of one social group to exert its dominance over another.” If so, why does anyone need English teachers?

Who issued this manifesto? Was it some ignorant, hotheaded teenagers drunk on the notion of student power? No. It was the National Council of Teachers of English. 10 From Students’ Right to Their Own Language, Conference on College Composition and Communication, Fall 1974; quoted in Arn and Charlene Tibbetts, What’s Happening to American English? (Scribner’s: 1978), p. 118.

If you want a hint as to the basic philosophy operative here, I will mention that the editor of College English, one of the major journals of the profession, objects to “an industrial society [that] will continue to want from us — or someone else — composition, verbal manners, discipline in problem solving, and docile rationality.” 11 See College English, Feb. 1976, p. 631; quoted in Tibbetts, op. cit., p. 119. Note how explicit this is. The climax of his “enemies list” is “rationality.”

Despite today’s subjectivism, some rules of composition are still being taught. Certain of these are valid enough, having been carried over from a better past. But some are horrifying. Here is an exercise in how to write topic sentences. The students are given two possible sentences with which to start a paragraph, then are asked to choose which would make a good opening and which a bad one. Here is one such pair:

1. Cooking is my favorite hobby.

2. It really isn’t hard to stir-fry Chinese vegetables.

The correct answer? Number 1 is bad. It is too abstract. (!) Students should not write about so enormous a subject as an entire hobby. They should focus only on one concrete under it, such as Chinese vegetables.

Here is another pair:

1. There is too much pollution in the world.

2. We have begun to fight pollution in our own neighborhood.

Of course, Number 1 is inadmissible. Students must not think about world problems — that is too vague — only about the dinky concretes in their own backyard. 12 Basic English Skills Practice Book, Orange Level (McDougal, Littell), p. 17.

This sort of exercise has been consciously designed to teach students to be concrete-bound. How are children with such an upbringing ever to deal with or think about problems that transcend Chinese vegetables and their own neighborhood? The implicit answer, absorbed by the students unavoidably, is: “You don’t have to worry about things like that; society or the president will take care of you; all you have to do is adapt.”

Before we leave English, I want to mention what has been happening to the teaching of literature in our schools as a consequence of the attitude toward concepts that we have been discussing. First, there has been the disappearance from the schools of the classics in favor of cheap current novels. The language and themes of the classics are too difficult for today’s students to grasp; one does not teach Shakespeare to savages, or to civilized children being turned into savages. Then, there is the continuous decline even of today’s debased standards. I quote from two English teachers: “Years ago we used to hear that Julius Caesar was too difficult for ninth-graders; now we are told that Lord of the Flies is too hard for the general run of tenth-graders.” Then, there is the final result, now increasingly common: the disappearance of literature of any kind and its replacement by what are called “media classes.” These are classes, in one book’s apt description, that “teach television, newspapers, car-repair magazines, and movies.” 13 Tibbetts, op. cit., pp. 80, 76.

I will pass up all the obvious comments on this frightening descent. I have just one question about it: why should these graduates of TV and car-repair magazines care if the great books of the past are burned by government edict — when they can’t read them anyway?

Turning to the teaching of science in our schools, I want to mention an instructive book written by two professors at Purdue University; titled Creative Sciencing, it tells science teachers how to teach their subject properly. To learn science, the book declares, students must engage in “hands-on science activities.” They must perform a series of concrete “experiments,” such as designing a bug catcher, collecting pictures of objects that begin with a c, going on field trips to the local factory, or finding polluters in the community. (These examples are taken from the book.) There is no necessary order to these activities. The children are encouraged to interact with the classroom materials “in their own way,” as the mood strikes them. They are not to be inhibited by a teacher-imposed structure or by the logic of the subject. 14 Alfred De Vito and Gerald H. Krockover, Creative Sciencing (Little, Brown: 1980), pp. 15, 70, 74, 19.

You may wonder whether students taught in this manner will ever learn the abstract concepts and principles of science, the natural laws and explanatory theories that have been painstakingly discovered across the centuries — the knowledge that makes us civilized men rather than jungle primitives.

The answer has been given by F. James Rutherford, chief education officer of the American Association for the Advancement of Science. “We’re too serious,” he declared. “We insist on all the abstract stuff. We need to relax and let the children learn their own neighborhood.” This statement was made at a meeting of experts brought together by a large foundation to discover what ails science teaching. 15 Quoted in the New York Times, Jan. 31, 1984.

Today’s education, I have said, reduces children to the status of animals, without the ability to know or predict the future. Animals, however, can rely on brute instinct to guide them. Children cannot; brought up this way, they soon begin to feel helpless — to feel that everything is changing and that they can count on nothing.

The above is not merely my polemic. The science teachers are working deliberately to create this state of mind. The teachers are openly skeptical themselves, having been given a similar upbringing, and they insist to their students that everything is changing, that factual information is continuously becoming outdated, and that there are things much more important in class — in science class — than truth. It is hard to believe how brazen these people have become. “When preparing performance objectives,” the Creative Sciencing book says, “you may wish to consider the fact that we don’t demand accuracy in art or creative writing, but we have permitted ourselves to require accuracy in science. We may be paying a high price in lost interest, enthusiasm, vitality, and creativity in science because of this requirement of accuracy.” 16 Op. cit., p. 33.

Our students should not have to be concerned about factual accuracy. They need have no idea whether gases expand or contract under pressure, or whether typhus germs cause or cure disease — but this will leave them free to be “vital” and “creative.”

But, you may ask, what if a student comes out in class with a wrong answer to a factual question? You are old-fashioned. There is no such answer, and besides it would be bad for the student’s psychology if there were: “How many times will a student try to respond to a question if continually told that his or her answers are wrong? Wrong answers should be reserved for quiz shows on television.” 17 Ibid., p. 38.

What then is the point in having a teacher at all? — since there are no wrong answers, and since adults must not be “authoritarian,” and since, as John Dewey has proclaimed, students do not learn by listening or by reading, but only by “doing.” This brings me to an extremely important issue, one that is much wider than science teaching.

My overriding impression of today’s schools, derived from every class I visited, is that teachers no longer teach. They no longer deliver prepared material while the students listen attentively and take notes. Instead, what one encounters everywhere is group-talking, i.e., class participation and class discussion. Most of the teachers I saw were enthusiastic professionals, excellent at what they do. But they conceive their role primarily as bull-session moderators. Some of the teachers obviously had a concealed lesson in mind, which they were bootlegging to the students — in the guise of asking leading questions or making brief, purposeful side comments. But the point is that the lesson had to be bootlegged. The official purpose of the class was for the pupils to speak more or less continuously — at any rate, well over half the time.

I asked one group of high school students if their teachers ever delivered lectures in class. “Oh no!” they cried incredulously, as though I had come from another planet or a barbaric past. “No one does that anymore.”

All the arguments offered to defend this anti-teaching approach are senseless.

“Students,” I have heard it said, “should develop initiative; they should discover knowledge on their own, not be spoon-fed by the teachers.” Then why should they go to school at all? Schooling is a process in which an expert is paid to impart his superior knowledge to ignorant beginners. How can this involve shelving the expert and leaving the ignorant to shift for themselves? What would you think of a doctor who told a patient to cure himself because the doctor opposed spoon-feeding?

“Students,” I have heard, “should be creative, not merely passive and receptive.” How can they be creative before they know anything? Creativity does not arise in a void; it can develop only after one has mastered the current cognitive context. A creative ignoramus is a contradiction in terms.

“We teach the method of thought,” I have heard, “rather than the content.” This is the most senseless claim of all. Let us leave aside the obvious fact that method cannot exist apart from some content. The more important point here is that thought is precisely what cannot be taught by the discussion approach. If you want to teach thought, you must first put up a sign at the front of the class: “Children should be seen and not heard.” To be exact: they may be heard as an adjunct of the lesson, if the teacher wishes to probe their knowledge, or answer a question of clarification, or assess their motivation to learn, or entertain a brief comment. But the dominant presence and voice must be that of the teacher, the cognitive expert, who should be feeding the material to the class in a highly purposeful fashion, carefully balancing concretes and abstractions, preparing for and then drawing and then interrelating generalizations, identifying the evidence at each point, etc. These are the processes that must first be absorbed year after year by the student in relation to a whole series of different contents. In the end, such training will jell in his mind into a knowledge of how to think — which he can then apply on his own, without any teacher. But he can never even begin to grasp these processes in the chaotic hullabaloo of a perpetual class discussion with equally ignorant peers.

Have you seen the [1984] television debates among the Democrats seeking to be president? Do you regard these spectacles of arbitrary assertion, constant subject-switching, absurd concrete-boundedness, and brazen ad hominem as examples of thinking? This is exactly the pattern that is being inculcated as thinking today by the class-discussion method.

An educator with any inkling of the requirements of a conceptual consciousness would never dream of running a school this way. But an educator contemptuous of concepts, and therefore of knowledge, would see no objection to it.

In the class discussions I saw, the students were regularly asked to state their own opinion. They were asked it in regard to issues about which they had no idea how to have an opinion, since they had no knowledge of the relevant facts or principles, and no knowledge of the methods of logical argument. Most of the time the students were honest; they had no opinion, in the sense of a sincere, even if mistaken, conviction on the question at hand. But they knew that they were expected to “express themselves.” Time and again, therefore, I heard the following: “I like (or dislike) X.” “Why?”  “Because I do. That’s my opinion.” Whereupon the teacher would nod and say “very interesting” or “good point.” Everybody’s point, it seemed, was good, as good as everybody else’s, and reasons were simply irrelevant. The conclusion being fostered in the minds of the class was: “It’s all arbitrary; anything goes and no one really knows.” The result is not only the spread of subjectivism, but of a self-righteous subjectivism, which cannot even imagine what objectivity would consist of.

Project a dozen years of this kind of daily processing. One study of American students notes that they “generally offered superficial comments . . . and consultants observed that they seemed ‘genuinely puzzled at requests to explain or defend their points of view.’” 18 “Are Your Kids Learning to Think?” Changing Times, Dec. 1983; quoting the National Assessment of Educational Progress. What else could anyone expect?

Now let me quote from a New York Times news story.

“I like [Senator Gary Hart’s] ideas,” said Darla Doyle, a Tampa homemaker. “He’s a good man. His ideas are fresher than Mondale’s are. I like the way he comes across.”

A reporter asked Mrs. Doyle to identify the ideas that appealed to her. “That’s an unfair question,” she said, asking for a moment to consider her answer. Then she replied, “He wants to talk with Russia.”

The headline of this story is: “Hart’s Fans Can’t Say Why They Are.” 19 Mar. 9, 1984.

According to John Dewey, students are bored by lectures, but motivated to learn by collective “doing.” Not the ones I saw. Virtually every class was in continuous turmoil, created by students waving their hands to speak, dropping books, giggling, calling out remarks, whispering asides, yawning, fidgeting, shifting, shuffling. The dominant emotion was a painful boredom, which is the sign of minds being mercilessly starved and stunted. Perhaps this explains the magic influence of the bell. The instant it rang, everywhere I went, the room was empty, as though helpless victims were running for their lives from a dread plague. And so in a sense they were.

Ladies and gentlemen, our schools are failing in every subject and on a fundamental level. They are failing methodically, as a matter of philosophic principle. The anti-conceptual epistemology that grips them comes from John Dewey and from all his fellow irrationalists, who dominate twentieth-century American culture, such as linguistic analysts, psychoanalysts, and neo-Existentialists. And behind all these, as I argued in The Ominous Parallels, stands a century of German philosophy inaugurated by history’s greatest villain: Immanuel Kant, the first man to dedicate his life and his system to the destruction of reason.

Epistemological corruption is not the only cause of today’s educational fiasco. There are many other contributing factors, such as the teachers unions, and the senseless requirements of the teachers colleges, and the government bureaucracies (local and federal). But epistemology is the basic cause, without reference to which none of the others can be intelligently analyzed or remedied.

Now let me recount for you two last experiences, which bear on the political implications of today’s educational trend.

One occurred at the most prestigious teacher-training institution in the country, Teachers College of Columbia University.

In my first class there, chosen at random, the professor made the following pronouncement to a group of sixty future teachers: “The evil of the West is not primarily its economic exploitation of the Third World, but its ideological exploitation. The crime of the West was to impose upon the communal culture of Africa the concept of the individual.” I thought I had heard everything, but this shocked me. I looked around. The future teachers were dutifully taking it down; there were no objections.

Despite their talk about “self-expression,” today’s educators have to inculcate collectivism. Man’s organ of individuality is his mind; deprived of it, he is nothing, and can do nothing but huddle in a group as his only hope of survival.

The second experience occurred in a class of juniors and seniors at a high school for the academically gifted. The students had just returned from a visit to the United Nations, where they had met with an official of the Russian delegation, and they were eager to discuss their reactions. The class obviously disliked the Russian, feeling that his answers to their questions about life in Russia had been evasions or lies. But soon someone remarked that we Americans are accustomed to believing what our government says, while the Russians naturally believe theirs. “So how do I know?” he concluded. “Maybe everything is a lie.”

“What is truth?” asked one boy, seemingly quite sincere; the class laughed, as though this were obviously unanswerable.

“Neither side is good,” said another student. “Both countries lie all the time. But the issue is the percentage. What we need to know is how much they lie — is it 99 percent for one, for example, and 82 percent for the other?”

After a lot more of this, including some pretty weak arguments in favor of America by a small patriotic faction, one boy summed up the emerging consensus. “We can never know who is lying or telling the truth,” he said. “The only thing we can know is bare fact. For example, we can know that a Korean airliner was shot down by the Russians [in 1983]. But as to the Russians’ story of the cause vs. our story, that is mere opinion.”

To which one girl replied in all seriousness: “But we can’t even know that — none of us saw the plane shot down.”

This class discussion was the climax of my tour. I felt as though I were witnessing the condensed essence of a perceptual-level schooling. “Thought,” these students were saying, “is helpless, principles are nonexistent, truth is unknowable, and there is, therefore, no way to choose between the United States of America and the bloodiest dictatorship in history, not unless we have seen the blood with our own eyes.”

These youngsters represent the future of our country. They are the children of the best and the brightest, who will become the businessmen, the artists, and the political leaders of tomorrow. Does this kind of generation have the strength — the intellectual strength, the strength of conviction — necessary to uphold the American heritage in an era dominated by incipient Big Brothers at home and missile-rattling enemies abroad?

It is not the students’ fault, and they do not fully believe the awful things they say, not yet. The ones I saw, at every school except for Columbia — and here I want to register some positive impressions — were extremely likable. For the most part, they struck me as clean-cut, well-mannered, exuberant, intelligent, innocent. They were not like the typical college student one meets, who is already hardening into a brash cynic or skeptic. These youngsters, despite all their doubts and scars, still seemed eager to discover some answers, albeit sporadically. They were still clinging to vestiges of the idea that man’s mind can understand reality and make sense of the world.

They are still open to reason — if someone would teach it to them.

Nor is it basically the teachers’ fault. The ones I saw were not like the college professors I know, who reek of stale malice and delight in wrecking their students’ minds. The teachers seemed to take their jobs seriously; they genuinely liked their classes and wanted to educate them. But given the direction of their own training, they were unable to do it.

There is a whole generation of children who still want to learn, and a profession much of which wants to help them, to say nothing of a country that devoutly wishes both groups well. Everything anyone would need to save the world is there, it is waiting, and all that is required to activate it is . . . what?

Merit pay? First we need a definition of merit, i.e., of the purpose of teaching. More classes in the use of computers? We have enough children who know FORTRAN but not English. Compulsory community service? (A recommendation of the Carnegie Commission.) Prayer in the schools? (President Reagan’s idea of a solution.)

All these are the equivalent of sticking Band-Aids on (or in the last two cases knives into) a dying man. The only real solution, which is a precondition of any other reform, is a philosophic change in our culture. We need a philosophy that will teach our colleges — and thereby our schoolteachers, and thus finally our youngsters — an abiding respect, a respect for reason, for man’s mind, for the conceptual level of consciousness. That is why I subscribe to the philosophy of Ayn Rand. Hers is the only such philosophy in America today. It could be the wonder cure that would revive a generation.

The National Commission on Excellence in Education declared, “If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war.” 20 Quoted in the New York Times, Apr. 27, 1983. Intellectually speaking, however, we are under the yoke of a foreign power. We are under the yoke of Kant, Hegel, Marx, and all their disciples. What we need now is another Declaration of Independence — not political independence from England this time, but philosophical independence from Germany.

To achieve it would be a monumental job, which would take many decades. As part of the job, I want to recommend one specific step to improve our schools: close down the teachers colleges.

There is no rational purpose to these institutions (and so they do little but disseminate poisonous ideas). Teaching is not a skill acquired through years of classes; it is not improved by the study of “psychology” or “methodology” or any of the rest of the stuff the schools of education offer. Teaching requires only the obvious: motivation, common sense, experience, a few good books or courses on technique, and, above all, a knowledge of the material being taught. Teachers must be masters of their subject; this — not a degree in education — is what school boards should demand as a condition of employment.

This one change would dramatically improve the schools. If experts in subject matter were setting the terms in the classroom, some significant content would have to reach the students, even given today’s dominant philosophy. In addition, the basket cases who know only the Newspeak of their education professors would be out of a job, which would be another big improvement.

This reform, of course, would be resisted to the end by today’s educational establishment, and could hardly be achieved nationally without a philosophic change in the country. But it gives us a starting point to rally around that pertains specifically to the field of education. If you are a parent or a teacher or merely a concerned taxpayer, you can start the battle for quality in education by demanding loudly — even in today’s corrupt climate — that the teachers your school employs know what they are talking about, and then talk about it.

“If a nation expects to be ignorant and free . . .” wrote Thomas Jefferson, “it expects what never was and never will be.” 21 The Jeffersonian Cyclopedia, “Freedom and Education,” p. 274.

Let us fight to make our schools once again bastions of knowledge. Then no dictator can rise among us by counting, like Big Brother in 1984, on the enshrinement of ignorance.

And then we may once again have a human future ahead of us.