Religion Versus America

This lecture was delivered at Boston’s Ford Hall Forum on April 20, 1986, published in The Objectivist Forum in June 1986, and anthologized in The Voice of Reason: Essays in Objectivist Thought in 1989.

 

A specter is haunting America — the specter of religion. This, borrowing Karl Marx’s literary style, is my theme tonight. Where do I see religion? The outstanding political fact of the 1980s is the rise of the New Right, and its penetration of the Republican Party under President Reagan. The bulk of the New Right consists of Protestant Fundamentalists, typified by the Moral Majority. These men are frequently allied on basic issues with other religiously oriented groups, including conservative Catholics of the William F. Buckley ilk and neoconservative Jewish intellectuals of the Commentary magazine variety.

All these groups observed the behavior of the New Left a while back and concluded, understandably enough, that the country was perishing. They saw the liberals’ idealization of drugged hippies and nihilistic Yippies; they saw the proliferation of pornography, of sexual perversion, of noisy Lib and Power gangs running to the Democrats to demand ever more outrageous handouts and quotas; they heard the routine leftist deprecation of the United States and the routine counsel to appease Soviet Russia — and they concluded, with good reason, that what the country was perishing from was a lack of values, of ethical absolutes, of morality.

Values, the Left retorted, are subjective; no lifestyle (and no country) is better or worse than any other; there is no absolute right or wrong anymore — unless, the liberals added, you believe in some outmoded ideology like religion. Precisely, the New Rightists reply; that is our whole point. There are absolute truths and absolute values, they say, which are the key to the salvation of our great country; but there is only one source of such values: not man or this earth or the human brain, but the Deity as revealed in scripture. The choice we face, they conclude, is the skepticism, decadence, and statism of the Democrats, or morality, absolutes, Americanism, and their only possible base: religion — old-time, Judeo-Christian religion.

“Religious America is awakening, perhaps just in time for our country’s sake,” said Mr. Reagan in 1980. “In a struggle against totalitarian tyranny, traditional values based on religious morality are among our greatest strengths.” [1: Quoted in Conservative Digest, Sept. 1980.]

“Religious views,” says Congressman Jack Kemp, “lie at the heart of our political system. The ‘inalienable rights’ to life, liberty and the pursuit of happiness are based on the belief that each individual is created by God and has a special value in His eyes. . . . Without a common belief in the one God who created us, there could be no freedom and no recourse if a majority were to seek to abrogate the rights of the minority.” [2: From a symposium on “Sex and God in American Politics,” Policy Review, Summer 1984.]

Or, as Education Secretary William Bennett sums up this viewpoint: “Our values as a free people and the central values of the Judeo-Christian tradition are flesh of the flesh and blood of the blood.” [3: Quoted in the New York Times, Aug. 8, 1985.]

Politicians in America have characteristically given lip service to the platitudes of piety. But the New Right is different. These men seem to mean their religiosity, and they are dedicated to implementing their religious creeds politically; they seek to make these creeds the governing factor in the realm of our personal relations, our art and literature, our clinics and hospitals, and the education of our youth. Whatever else you say about him, Mr. Reagan has delivered handsomely on one of his campaign promises: he has given the adherents of religion a prominence in setting the national agenda that they have not had in this country for generations.

This defines our subject for tonight. It is the new Republican inspiration and the deeper questions it raises. Is the New Right the answer to the New Left? What is the relation between the Judeo-Christian tradition and the principles of Americanism? Are Ronald Reagan and Jack Kemp, as their admirers declare, leading us to a new era of freedom and capitalism — or to something else?

In discussing these issues, I am not going to say much about the New Right as such; its specific beliefs are widely known. Instead, I want to examine the movement within a broader, philosophical context. I want to ask: what is religion? and then: how does it function in the life of a nation, any nation, past or present? These, to be sure, are very abstract questions, but they are inescapable. Only when we have considered them can we go on to judge the relation between a particular religion, such as Christianity, and a particular nation, such as America.

Let us begin with a definition. What is religion as such? What is the essence common to all its varieties, Western and Oriental, which distinguishes it from other cultural phenomena?

Religion involves a certain kind of outlook on the world and a consequent way of life. In other words, the term “religion” denotes a type (actually, a precursor) of philosophy. As such, a religion must include a view of knowledge (which is the subject matter of the branch of philosophy called epistemology) and a view of reality (metaphysics). Then, on this foundation, a religion builds a code of values (ethics). So the question becomes: what type of philosophy constitutes a religion?

The Oxford English Dictionary defines “religion” as “a particular system of faith and worship,” and goes on, in part: “Recognition on the part of man of some higher unseen power as having control of his destiny, and as being entitled to obedience, reverence, and worship.”

The fundamental concept here is “faith.” “Faith” in this context means belief in the absence of evidence. This is the essential that distinguishes religion from science. A scientist may believe in entities which he cannot observe, such as atoms or electrons, but he can do so only if he proves their existence logically, by inference from the things he does observe. A religious man, however, believes in “some higher unseen power” which he cannot observe and cannot logically prove. As the whole history of philosophy demonstrates, no study of the natural universe can warrant jumping outside it to a supernatural entity. The five arguments for God offered by the greatest of all religious thinkers, Thomas Aquinas, are widely recognized by philosophers to be logically defective; they have each been refuted many times, and they are the best arguments that have ever been offered on this subject.

Many philosophers indeed now go further: they point out that God not only is an article of faith, but that this is essential to religion. A God susceptible of proof, they argue, would actually wreck religion. A God open to human logic, to scientific study, to rational understanding, would have to be definable, delimited, finite, amenable to human concepts, obedient to scientific law, and thus incapable of miracles. Such a thing would be merely one object among others within the natural world; it would be merely another datum for the scientist, like some new kind of galaxy or cosmic ray, not a transcendent power running the universe and demanding man’s worship. What religion rests on is a true God, i.e., a God not of reason, but of faith.

If you want to concretize the idea of faith, I suggest that you visit, of all places, the campuses of the Ivy League, where, according to the New York Times, a religious revival is now occurring. Will you find students eagerly discussing proofs or struggling to reinterpret the ancient myths of the Bible into some kind of consistency with the teachings of science? On the contrary. The students, like their parents, are insisting that the Bible be accepted as literal truth, whether it makes logical sense or not. “Students today are more reconciled to authority,” one campus religious official notes. “There is less need for students to sit on their own mountaintop” — i.e., to exercise their own independent minds and judgment. Why not? They are content simply to believe. At Columbia University, for instance, a new student group gathers regularly not to analyze, but “to sing, worship, and speak in tongues.” “People are coming back to religion in a way that some of us once went to the counterculture,” says a chaplain at Columbia. [4: The New York Times, Dec. 25, 1985, and Jan. 5, 1986.] This is absolutely true. And note what they are coming back to: not reason or logic, but faith.

“Faith” names the method of religion, the essence of its epistemology; and, as the Oxford English Dictionary states, the belief in “some higher unseen power” is the basic content of religion, its distinctive view of reality, its metaphysics. This higher power is not always conceived as a personal God; some religions construe it as an impersonal dimension of some kind. The common denominator is the belief in the supernatural — in some entity, attribute, or force transcending and controlling this world in which we live.

According to religion, this supernatural power is the essence of the universe and the source of all value. It constitutes the realm of true reality and of absolute perfection. By contrast, the world around us is viewed as only semi-real and as inherently imperfect, even corrupt, in any event metaphysically unimportant. According to most religions, this life is a mere episode in the soul’s journey to its ultimate fulfillment, which involves leaving behind earthly things in order to unite with Deity. As a pamphlet issued by a Catholic study group expresses this point: Man “cannot achieve perfection or true happiness in this life here on earth. He can only achieve this in the eternity of the next life after death. . . . Therefore . . . what a person has or lacks in terms of worldly possessions, privileges or advantages is not important.” [5: “What the Catholic Church Teaches About Socialism, Communism, and Marxism,” The Catholic Study Council, Washington, DC.] In New Delhi a few months ago, expressing this viewpoint, Pope John Paul II urged on the Indians a life of “asceticism and renunciation.” In Quebec some time earlier, he decried “the fascination the modern world feels for productivity, profit, efficiency, speed, and records of physical strength.” Too many men, he explained in Luxembourg, “consciously organize their way of life merely on the basis of the realities of this world without any heed for God and His wishes.” [6: The New York Times, Feb. 2, 1986, Sept. 11, 1984, and May 17, 1985.]

This brings us to religious ethics, the essence of which also involves faith, faith in God’s commandments. Virtue, in this view, consists of obedience. Virtue is not a matter of achieving your desires, whatever they may be, but of seeking to carry out God’s; it is not the pursuit of egoistic goals, whether rational or not, but the willingness to renounce your own goals in the service of the Lord. What religion counsels is the ethics of self-transcendence, self-abnegation, self-sacrifice.

What single attitude most stands in the way of this ethics, according to religious writers? The sin of pride. Why is pride a sin? Because man, in this view, is a metaphysically defective creature. His intellect is helpless in the crucial questions of life. His will has no real power over his existence, which is ultimately controlled by God. His body lusts after all the temptations of the flesh. In short, man is weak, ugly, and low, a typical product of the low, unreal world in which he lives. Your proper attitude toward yourself, therefore, as to this world, should be a negative one. For earthly creatures such as you and me, “Know thyself” means “Know thy worthlessness”; simple honesty entails humility, self-castigation, even self-disgust.

Religion means orienting one’s existence around faith, God, and a life of service — and correspondingly of downgrading or condemning four key elements: reason, nature, the self, and man. Religion cannot be equated with values or morality or even philosophy as such; it represents a specific approach to philosophic issues, including a specific code of morality.

What effect does this approach have on human life? We do not have to answer by theoretical deduction, because Western history has been a succession of religious and unreligious periods. The modern world, including America, is a product of two of these periods: of Greco-Roman civilization and of medieval Christianity. So, to enable us to understand America, let us first look at the historical evidence from these two periods; let us look at their stand on religion and at the practical consequences of this stand. Then we will have no trouble grasping the base and essence of the United States.

Ancient Greece was not a religious civilization, not on any of the counts I mentioned. The gods of Mount Olympus were like a race of elder brothers to man, mischievous brothers with rather limited powers; they were closer to Steven Spielberg’s extraterrestrial visitor than to anything we would call “God.” They did not create the universe or shape its laws or leave any message of revelation or demand a life of sacrifice. Nor were they taken very seriously by the leading voices of the culture, such as Plato and Aristotle. From start to finish, the Greek thinkers recognized no sacred texts, no infallible priesthood, no intellectual authority beyond the human mind; they allowed no room for faith. Epistemologically, most were staunch individualists who expected each man to grasp the truth by his own powers of sensory observation and logical thought. For details, I refer you to Aristotle, the preeminent representative of the Greek spirit.

Metaphysically, as a result, Greece was a secular culture. Men generally dismissed or downplayed the supernatural; their energies were devoted to the joys and challenges of life. There was a shadowy belief in immortality, but the dominant attitude to it was summed up by Homer, who has Achilles declare that he would rather be a slave on earth than “bear sway among all the dead that be departed.”

The Greek ethics followed from this base. All the Greek thinkers agreed that virtue is egoistic. The purpose of morality, in their view, is to enable a man to achieve his own fulfillment, his own happiness, by means of a proper development of his natural faculties — above all, of his cognitive faculty, his intellect. And as to the Greek estimate of man — look at the statues of the Greek gods, made in the image of human strength, human grace, human beauty; and read Aristotle’s account of the virtue — yes, the virtue — of pride.

I must note here that in many ways Plato was an exception to the general irreligion of the Greeks. But his ideas were not dominant until much later. When Plato’s spirit did take over, the Greek approach had already died out. What replaced it was the era of Christianity.

Intellectually speaking, the period of the Middle Ages was the exact opposite of classical Greece. Its leading philosophic spokesman, Augustine, held that faith was the basis of man’s entire mental life. “I do not know in order to believe,” he said, “I believe in order to know.” In other words, reason is nothing but a handmaiden of revelation; it is a mere adjunct of faith, whose task is to clarify, as far as possible, the dogmas of religion. What if a dogma cannot be clarified? So much the better, answered an earlier Church father, Tertullian. The truly religious man, he said, delights in thwarting his reason; that shows his commitment to faith. Thus Tertullian’s famous answer, when asked about the dogma of God’s self-sacrifice on the cross: “Credo quia absurdum” (“I believe it because it is absurd”).

As to the realm of physical nature, the medievals characteristically regarded it as a semi-real haze, a transitory stage in the divine plan, and a troublesome one at that, a delusion and a snare — a delusion because men mistake it for reality, a snare because they are tempted by its lures to jeopardize their immortal souls. What tempts them is the prospect of earthly pleasure.

What kind of life, then, does the immortal soul require on earth? Self-denial, asceticism, the resolute shunning of this temptation. But isn’t it unfair to ask men to throw away their whole enjoyment of life? Augustine’s answer is: what else befits creatures befouled by original sin, creatures who are, as he put it, “crooked and sordid, bespotted and ulcerous”?

What were the practical results — in the ancient world, then in the medieval — of these two opposite approaches to life?

Greece created philosophy, logic, science, mathematics, and a magnificent, man-glorifying art; it gave us the base of modern civilization in every field; it taught the West how to think. In addition, through its admirers in ancient Rome, which built on the Greek intellectual base, Greece indirectly gave us the rule of law and the first idea of man’s rights (this idea was originated by the pagan Stoics). Politically, the ancients never conceived a society of full-fledged individual liberty; no nation achieved that before the United States. But the ancients did lay certain theoretical bases for the concept of liberty; and in practice, both in some of the Greek city-states and in republican Rome, large numbers of men at various times were at least relatively free. They were incomparably more free than their counterparts ever had been in the religious cultures of ancient Egypt and its equivalents.

What were the practical results of the medieval approach? The Dark Ages were dark on principle. Augustine fought against secular philosophy, science, art; he regarded all of it as an abomination to be swept aside; he cursed science in particular as “the lust of the eyes.” Unlike many Americans today, who drive to church in their Cadillac or tape their favorite reverend on the VCR so as not to interrupt their tennis practice, the medievals took religion seriously. They proceeded to create a society that was antimaterialistic and anti-intellectual. I do not have to remind you of the lives of the saints, who were the heroes of the period, including the men who ate only sheep’s gall and ashes, quenched their thirst with laundry water, and slept with a rock for their pillow. These were men resolutely defying nature, the body, sex, pleasure, all the snares of this life — and they were canonized for it, as, by the essence of religion, they should have been. The economic and social results of this kind of value code were inevitable: mass stagnation and abject poverty, ignorance and mass illiteracy, waves of insanity that swept whole towns, a life expectancy in the teens. “Woe unto ye who laugh now,” the Sermon on the Mount had said. Well, they were pretty safe on this count. They had precious little to laugh about.

What about freedom in this era? Study the existence of the feudal serf tied for life to his plot of ground, his noble overlord, and the all-encompassing decrees of the Church. Or, if you want an example closer to home, jump several centuries forward to the American Puritans, who were a medieval remnant transplanted to a virgin continent, and who proceeded to establish a theocratic dictatorship in colonial Massachusetts. Such a dictatorship, they declared, was necessitated by the very nature of their religion. You are owned by God, they explained to any potential dissenter; therefore, you are a servant who must act as your Creator, through his spokesmen, decrees. Besides, they said, you are innately depraved, so a dictatorship of the elect is necessary to ride herd on your vicious impulses. And, they said, you don’t really own your property either; wealth, like all values, is a gift from Heaven temporarily held in trust, to be controlled, like all else, by the elect. And if all this makes you unhappy, they ended up, so what? You’re not supposed to pursue happiness in this life anyway.

There can be no philosophic breach between thought and action. The consequence of the epistemology of religion is the politics of tyranny. If you cannot reach the truth by your own mental powers, but must offer obedient faith to a cognitive authority, then you are not your own intellectual master; in such a case, you cannot guide your behavior by your own judgment, either, but must be submissive in action as well. This is the reason why, historically — as Ayn Rand has pointed out — faith and force are always corollaries; each requires the other.

The early Christians did contribute some good ideas to the world, ideas that proved important to the cause of future freedom. I must, so to speak, give the angels their due. In particular, the idea that man has value as an individual — that the individual soul is precious — is essentially a Christian legacy to the West; its first appearance was in the form of the idea that every man, despite original sin, is made in the image of God (as against the pre-Christian notion that a certain group or nation has a monopoly on human value, while the rest of mankind are properly slaves or mere barbarians). But notice a crucial point: this Christian idea, by itself, was historically impotent. It did nothing to unshackle the serfs or stay the Inquisition or turn the Puritan elders into Thomas Jeffersons. Only when the religious approach lost its power — only when the idea of individual value was able to break free from its Christian context and become integrated into a rational, secular philosophy — only then did this kind of idea bear practical fruit.

What — or who — ended the Middle Ages? My answer is: Thomas Aquinas, who introduced Aristotle, and thereby reason, into medieval culture. In the thirteenth century, for the first time in a millennium, Aquinas reasserted in the West the basic pagan approach. Reason, he said in opposition to Augustine, does not rest on faith; it is a self-contained, natural faculty, which works on sense experience. Its essential task is not to clarify revelation, but rather, as Aristotle had said, to gain knowledge of this world. Men, Aquinas declared forthrightly, must use and obey reason; whatever one can prove by reason and logic, he said, is true. Aquinas himself thought that he could prove the existence of God, and he thought that faith is valuable as a supplement to reason. But this did not alter the nature of his revolution. His was the charter of liberty, the moral and philosophical sanction, which the West had desperately needed. His message to mankind, after the long ordeal of faith, was in effect: “It is all right. You don’t have to stifle your mind anymore. You can think.”

The result, in historical short order, was the revolt against the authority of the Church, the feudal breakup, the Renaissance. Renaissance means “rebirth,” the rebirth of reason and of man’s concern with this world. Once again, as in the pagan era, we see secular philosophy, natural science, man-glorifying art, and the pursuit of earthly happiness. It was a gradual, tortuous change, with each century becoming more worldly than the preceding, from Aquinas to the Renaissance to the Age of Reason to the climax and end of this development: the eighteenth century, the Age of Enlightenment. This was the age in which America’s founding fathers were educated and in which they created the United States.

The Enlightenment represented the triumph (for a short while anyway) of the pagan Greek, and specifically of the Aristotelian, spirit. Its basic principle was respect for man’s intellect and, correspondingly, the wholesale dismissal of faith and revelation. Reason the Only Oracle of Man, said Ethan Allen of Vermont, who spoke for his age in demanding unfettered free thought and in ridiculing the primitive contradictions of the Bible. “While we are under the tyranny of Priests,” he declared in 1784, “. . . it ever will be their interest, to invalidate the law of nature and reason, in order to establish systems incompatible therewith.” [7: From Reason the Only Oracle of Man (Bennington: 1784), p. 457.]

Elihu Palmer, another American of the Enlightenment, was even more outspoken. According to Christianity, he writes, God “is supposed to be a fierce, revengeful tyrant, delighting in cruelty, punishing his creatures for the very sins which he causes them to commit; and creating numberless millions of immortal souls, that could never have offended him, for the express purpose of tormenting them to all eternity.” The purpose of this kind of notion, he says elsewhere, “the grand object of all civil and religious tyrants . . . has been to suppress all the elevated operations of the mind, to kill the energy of thought, and through this channel to subjugate the whole earth for their own special emolument.” “It has hitherto been deemed a crime to think,” he observes, but at last men have a chance — because they have finally escaped from the “long and doleful night” of Christian rule, and have grasped instead “the unlimited power of human reason” — “reason, which is the glory of our nature.” [8: The Examiners Examined: Being a Defence of the Age of Reason (New York: 1794), pp. 9–10; An Enquiry Relative to the Moral and Political Improvement of the Human Species (London: 1826), p. 35; Principles of Nature (New York: 1801), from chap. 1 and chap. 22.]

Allen and Palmer are extreme representatives of the Enlightenment spirit, granted; but they are representatives. Theirs is the attitude which was new in the modern world, and which, in a less inflammatory form, was shared by all the founding fathers as their basic, revolutionary premise. Thomas Jefferson states the attitude more sedately, with less willful provocation to religion, but it is the same essential attitude. “Fix reason firmly in her seat,” he advises a nephew, “and call to her tribunal every fact, every opinion. Question with boldness even the existence of a God; because, if there be one, he must more approve of the homage of reason, than that of blindfolded fear.” [9: Writings, A. E. Bergh, ed. (Washington, DC: 1903), vol. 6, p. 258.] Observe the philosophic priorities in this advice: man’s mind comes first; God is a derivative, if you can prove him. The absolute, which must guide the human mind, is the principle of reason; every other idea must meet this test. It is in this approach — in this fundamental rejection of faith — that the irreligion of the Enlightenment lies.

The consequence of this approach was the age’s rejection of all the other religious priorities. In metaphysics: this world once again was regarded as real, as important, and as a realm not of miracles, but of impersonal, natural law. In ethics: success in this life became the dominant motive; the veneration of asceticism was swept aside in favor of each man’s pursuit of happiness — his own happiness on earth, to be achieved by his own effort, by self-reliance and self-respect leading to self-made prosperity. But can man really achieve fulfillment on earth? Yes, the Enlightenment answered; man has the means, the potent faculty of intellect, necessary to achieve his goals and values. Man may not yet be perfect, people said, but he is perfectible; he must be so, because he is the rational animal.

Such were the watchwords of the period: not faith, God, service, but reason, nature, happiness, man.

Many of the founding fathers, of course, continued to believe in God and to do so sincerely, but it was a vestigial belief, a leftover from the past which no longer shaped the essence of their thinking. God, so to speak, had been kicked upstairs. He was regarded now as an aloof spectator who neither responds to prayer nor offers revelations nor demands immolation. This sort of viewpoint, known as deism, cannot, properly speaking, be classified as a religion. It is a stage in the atrophy of religion; it is the step between Christianity and outright atheism.

This is why the religious men of the Enlightenment were scandalized and even panicked by the deist atmosphere. Here is the Rev. Peter Clark of Salem, Mass., in 1739: “The former Strictness in Religion, that . . . Zeal for the Order and Ordinances of the Gospel, which was so much the Glory of our Fathers, is very much abated, yea disrelished by too many: and a Spirit of Licentiousness, and Neutrality in Religion . . . so opposite to the Ways of God’s People, do exceedingly prevail in the midst of us.” [10: A Sermon Preach’d . . . May 30th, 1739 (Boston: 1739), p. 40.] And here, fifty years later, is the Rev. Charles Backus of Springfield, Mass. The threat to divine religion, he says, is the “indifference which prevails” and the “ridicule.” Mankind, he warns, is in “great danger of being laughed out of religion.” [11: A Sermon Preached in Long-Meadow at the Publick Fast (Springfield: 1788).] This was true; these preachers were not alarmists; their description of the Enlightenment atmosphere is correct.

This was the intellectual context of the American Revolution. Point for point, the founding fathers’ argument for liberty was the exact counterpart of the Puritans’ argument for dictatorship — but in reverse, moving from the opposite starting point to the opposite conclusion. Man, the founding fathers said in essence (with a large assist from Locke and others), is the rational being; no authority, human or otherwise, can demand blind obedience from such a being — not in the realm of thought or, therefore, in the realm of action, either. By his very nature, they said, man must be left free to exercise his reason and then to act accordingly, i.e., by the guidance of his best rational judgment. Because this world is of vital importance, they added, the motive of man’s action should be the pursuit of happiness. Because the individual, not a supernatural power, is the creator of wealth, a man should have the right to private property, the right to keep and use or trade his own product. And because man is basically good, they held, there is no need to leash him; there is nothing to fear in setting free a rational animal.

This, in substance, was the American argument for man’s inalienable rights. It was the argument that reason demands freedom. And this is why the nation of individual liberty, which is what the United States was, could not have been founded in any philosophically different century. It required what the Enlightenment offered: a rational, secular context.

When you look for the source of an historic idea, you must consider philosophic essentials, not the superficial statements or errors that people may offer you. Even the most well-meaning men can misidentify the intellectual roots of their own attitudes. Regrettably, this is what the founding fathers did in one crucial respect. All men, said Jefferson, are endowed “by their Creator” with certain unalienable rights, a statement that formally ties individual rights to the belief in God. Despite Jefferson’s eminence, however, his statement (along with its counterparts in Locke and others) is intellectually unwarranted. The principle of individual rights does not derive from or depend on the idea of God as man’s creator. It derives from the very nature of man, whatever his source or origin; it derives from the requirements of man’s mind and his survival. In fact, as I have argued, the concept of rights is ultimately incompatible with the idea of the supernatural. This is true not only logically, but also historically. Through all the centuries of the Dark and Middle Ages, there was plenty of belief in a Creator; but it was only when religion began to fade that the idea of God as the author of individual rights emerged as an historical, nation-shaping force. What then deserves the credit for the new development — the age-old belief or the new philosophy? What is the real intellectual root and protector of human liberty — God or reason?

My answer is now evident. America does rest on a code of values and morality — in this, the New Right is correct. But, by all the evidence of philosophy and history, it does not rest on the values or ideas of religion. It rests on their opposite.

You are probably wondering here: “What about Communism? Isn’t it a logical, scientific, atheistic philosophy, and yet doesn’t it lead straight to totalitarianism?” The short answer to this is: Communism is not an expression of logic or science, but the exact opposite. Despite all its anti-religious posturings, Communism is nothing but a modern derivative of religion: it agrees with the essence of religion on every key issue, then merely gives that essence a new outward veneer or cover-up.

The Communists reject Aristotelian logic and Western science in favor of a “dialectic” process; reality, they claim, is a stream of contradictions which is beyond the power of “bourgeois” reason to understand. They deny the very existence of man’s mind, claiming that human words and actions reflect nothing but the alogical, predetermined churnings of blind matter. They do reject God, but they replace him with a secular stand-in, Society or the State, which they treat not as an aggregate of individuals, but as an unperceivable, omnipotent, supernatural organism, a “higher unseen power” transcending and dwarfing all individuals. Man, they say, is a mere social cog or atom, whose duty is to revere this power and to sacrifice everything in its behalf. Above all, they say, no such cog has the right to think for himself; every man must accept the decrees of Society’s leaders; he must accept them because that is the voice of Society, whether he understands it or not. Fully as much as Tertullian, Communism demands faith from its followers and subjects, “faith” in the literal, religious sense of the term. On every count, the conclusion is the same: Communism is not a new, rational philosophy; it is a tired, slavishly imitative heir of religion.

This is why, so far, Communism has been unable to win out in the West. Unlike the Russians, we have not been steeped enough in religion — in faith, sacrifice, humility and, therefore, in servility. We are still too rational, too this-worldly, and too individualistic to submit to naked tyranny. We are still being protected by the fading remnants of our Enlightenment heritage.

But we will not be so for long if the New Right has its way.

Philosophically, the New Right holds the same fundamental ideas as the New Left — its religious zeal is merely a variant of irrationalism and the demand for self-sacrifice — and therefore it has to lead to the same result in practice: dictatorship. Nor is this merely my theoretical deduction. The New Rightists themselves announce it openly. While claiming to be the defenders of Americanism, their distinctive political agenda is pure statism.

The outstanding example of this fact is their insistence that the state prohibit abortion even in the first trimester of pregnancy. A woman, in this view, has no right to her own body or even, the most consistent New Rightists add, to her own life; instead, she should be made to sacrifice at the behest of the state, to sacrifice her desires, her life goals, and even her existence in the name of a mass of protoplasm which is at most a potential human being, not an actual one. “Abortion,” says Paul Weyrich, executive director of the Committee for the Survival of a Free Congress, “is wrong in all cases. I believe that if you have to choose between new life and existing life, you should choose new life. The person who has had an opportunity to live at least has been given that gift by God and should make way for a new life on earth.” [12: From “Sex and God in American Politics,” op. cit.]

Another example: men and women, the New Right tells us, should not be free to conduct their sexual or romantic lives in private, in accordance with their own choice and values; the law should prohibit any sexual practices condemned by religion. And: children, we are told, should be indoctrinated with state-mandated religion at school. For instance, biology texts should be rewritten under government tutelage to present the Book of Genesis as a scientific theory on a par with or even superior to the theory of evolution. And, of course, the ritual of prayer must be forced down the children’s throats. Is this not, contrary to the Constitution, a state establishment of religion, and of a controversial, intellectual viewpoint? Not at all, says Jack Kemp. “If a prayer is said aloud,” he explains, “it need be no more than a general acknowledgment of the existence, power, authority, and love of God, the Creator.” [13: Ibid.] That’s all — nothing controversial or indoctrinating about that!

And: when the students finally do leave school, after all the indoctrination, can they then be trusted to deal with intellectual matters responsibly? No, says the New Right. Adults should not be free to write, to publish, or to read, according to their own judgment; literature should be censored by the state according to a religious standard of what is fitting as against obscene.

Is this a movement in behalf of Americanism and individual rights? Is it a movement consistent with the principles of the Constitution?

“The Constitution established freedom for religion,” says Mr. Kemp, “not from it” — a sentiment which is shared by President Reagan and by the whole New Right. [14: “Jack Kemp at Liberty Baptist,” Policy Review, Spring 1984.] What then becomes of intellectual freedom? Are meetings such as this evening’s deprived of constitutional protection, since the viewpoint I am propounding certainly does not come under “freedom for religion”? And what happens when one religious sect concludes that the statements of another are subversive of true religion? Who decides which, if either, should be struck down by the standard of “freedom for religion, not from it”? Can you predict the fate of free thought, and of “life, liberty and the pursuit of happiness,” if Mr. Kemp and associates ever get their hands fully on the courts and the Congress?

What we are seeing is the medievalism of the Puritans all over again, but without their excuse of ignorance. We are seeing it on the part of modern Americans, who live not before the founding fathers’ heroic experiment in liberty, but after it.

The New Right is not the voice of Americanism. It is the voice of thought control attempting to take over in this country and pervert and undo the actual American Revolution.

But, you may say, aren’t the New Rightists at least champions of property rights and capitalism, as against the economic statism of the liberals? They are not. Capitalism is the separation of state and economics, a condition that none of our current politicians or pressure groups even dreams of advocating. The New Right, like all the rest on the political scene today, accepts the welfare-state mixed economy created by the New Deal and its heirs; our conservatives now merely haggle on the system’s fringes about a particular regulation or handout they happen to dislike. In this matter, the New Right is moved solely by the power of tradition. These men do not want to achieve any change of basic course, but merely to slow down the march to socialism by freezing the economic status quo. And even in regard to this highly limited goal, they are disarmed and useless.

If you want to know why, I refer you to the published first drafts of the [1986] pastoral letter of the U.S. Catholic bishops, men who are much more consistent and philosophical than anyone in the New Right. The bishops recommend a giant step in the direction of socialism. They ask for a vast new government presence in our economic life, overseeing a vast new redistribution of wealth in order to aid the poor, at home and abroad. They ask for it on a single basic ground: consistency with the teachings of Christianity.

Some of you may wonder here: “But if the bishops are concerned with the poor, why don’t they praise and recommend capitalism, the great historical engine of productivity, which makes everyone richer?” If you think about it, however, you will see that, valid as this point may be, the bishops cannot accept it.

Can they praise the profit motive — while extolling selflessness? Can they commend the passion to own material property — while declaring that worldly possessions are not important? Can they urge men to practice the virtues of productiveness and long-range planning — while upholding as the human model the lilies of the field? Can they celebrate the self-assertive risk taking of the entrepreneur — while teaching that the meek shall inherit the earth? Can they glorify and liberate the creative ingenuity of the human mind, which is the real source of material wealth — while elevating faith above reason? The answers are obvious. Regardless of the unthinking pretenses of the New Right, no religion, by its nature, can appeal to or admire the capitalist system; not if the religion is true to itself. Nor can any religion liberate man’s power to create new wealth. If, therefore, the faithful are concerned about poverty — as the Bible demands they be — they have no alternative but to counsel a redistribution of whatever wealth already happens to have been produced. The goods, they have to say, are here. How did they get here? God, they reply, has seen to that; now let men make sure that His largesse is distributed fairly. Or, as the bishops put it: “The goods of this earth are common property and . . . men and women are summoned to faithful stewardship rather than to selfish appropriation or exploitation of what was destined for all.” [15: Catholic Social Teaching and the U.S. Economy (First Draft); in Origins, NC Documentary Service, vol. 14, no. 22/23, Nov. 15, 1984, p. 344.]

For further details on this point, I refer you to the bishops’ letter; given their premises, their argument is unanswerable. If, as the New Right claims, there is scriptural warrant for state control of men’s sexual activities, then there is surely much more such warrant for state control of men’s economic activities. The idea of the Bible (or the “Protestant ethic”) as the base of capitalism is ludicrous, both logically and historically.

Economically, as in all other respects, the New Right is leading us, admittedly or not, to the same end as its liberal opponents. By virtue of the movement’s essential premises, it is supporting and abetting the triumph of statism in this country — and, therefore, of Communism in the world at large. When a free nation betrays its own heritage, it has no heart left, no conviction by means of which to stand up to foreign aggressors.

There was a flaw in the intellectual foundations of America from the start: the attempt to combine the Enlightenment approach in politics with the Judeo-Christian ethics. For a while, the latter element was on the defensive, muted by the eighteenth-century spirit, so that America could gain a foothold, grow to maturity, and become great. But only for a while. Thanks to Immanuel Kant, as I have discussed in my book The Ominous Parallels, the base of religion — faith and self-sacrifice — was reestablished at the turn of the nineteenth century. Thereafter, all of modern philosophy embraced collectivism, in the form of socialism, Fascism, Communism, welfare statism. By now, the distinctive ideas at the base of America have been largely forgotten or swept aside. They will not be brought back by an appeal to religion.

What then is the solution? It is not atheism as such — and I say this even though as an Objectivist I am an atheist. “Atheism” is a negative; it means not believing in God — which leaves wide open what you do believe in. It is futile to crusade merely for a negative; the Communists, too, call themselves atheists. Nor is the answer “secular humanism,” about which we often hear today. This term is used so loosely that it is practically contentless; it is compatible with a wide range of conflicting viewpoints, including, again, Communism. To combat the doctrines that are destroying our country, out-of-context terms and ideas such as these are useless. What we need is an integrated, consistent philosophy in every branch, and especially in the two most important ones: epistemology and ethics. We need a philosophy of reason and of rational self-interest, a philosophy that would once again release the power of man’s mind and the energy inherent in his pursuit of happiness. Nothing less will save America or individual rights.

There are many good people in the world who accept religion, and many of them hold some good ideas on social questions. I do not dispute that. But their religion is not the solution to our problem; it is the problem. Do I say therefore that there should only be “freedom for atheism”? No, I am not Mr. Kemp. Of course, religions must be left free; no philosophic viewpoint, right or wrong, should be interfered with by the state. I do say, however, that it is time for patriots to take a stand — to name publicly what America does depend on, and why that is not Judaism or Christianity.

There are men today who advocate freedom and who recognize what ideas lie at its base, but who then counsel “practicality.” It is too late, they say, to educate people philosophically; we must appeal to what they already believe; we must pretend to endorse religion on strategic grounds, even if privately we don’t.

This is a counsel of intellectual dishonesty and of utter impracticality. It is too late indeed, far too late for a strategy of deception which by its nature has to backfire and always has, because it consists of affirming and supporting the very ideas that have to be uprooted and replaced. It is time to tell people the unvarnished truth: to stand up for man’s mind and this earth, and against any version of mysticism or religion. It is time to tell people: “You must choose between unreason and America. You cannot have both. Take your pick.”

If there is to be any chance for the future, this is the only chance there is.

My Thirty Years with Ayn Rand: An Intellectual Memoir

This lecture was delivered at Boston’s Ford Hall Forum on April 26, 1987, published in The Objectivist Forum in June 1987, and anthologized in The Voice of Reason: Essays in Objectivist Thought in 1989.

Ayn Rand was unique — as a mind and as a person. If I could be granted a wish outside my power, it would be to meet and talk to someone like her again; unfortunately, I do not expect this wish to come true. The root of her uniqueness, which I had abundant opportunity to experience and enjoy in my thirty-year friendship with her, was the nature of her mental processes.

The purpose of this intellectual memoir is not to report on the content of the ideas I learned from Ayn Rand — whoever knows her books knows that already — but on her method of thinking as I observed it, her approach to the whole realm of ideas and therefore of living, her basic way of functioning cognitively in any situation. Method is fundamental; it is that which underlies and shapes content and thus all human achievement, in every field. Ayn Rand’s method of thinking is an eloquent case in point: it is the root of her genius and of her distinctive art and philosophy. The mental processes she used in everyday life, from adolescence on, were the processes that led her, one step at a time, to all of her brilliant insights and to the principles of Objectivism.

Because of the role of method in human life, I have often thought that the greatest humanitarian service I could perform would be to leave the world a record and analysis of Ayn Rand’s mind and how it worked. In the present discussion, I can offer you at least a glimpse of what I was privileged to see. Near the end, I will say something less epistemological — about Ayn Rand as a person.

When I met Ayn Rand, in the spring of 1951, I was an ignorant, intelligent seventeen-year-old, an admirer of The Fountainhead, but one who knew nothing about philosophy or how to think. Ayn Rand brought me up intellectually. In the nature of the case, therefore, some of my reminiscences are going to cast me in the role of naïve foil exhibiting her brilliance by contrast. This implication does not bother me, however, because alongside my confusions and errors, I claim one offsetting virtue: I did finally learn and come to practice what Ayn Rand taught me.

The strongest first impression I had of Ayn Rand on the fateful evening I met her — fateful to my life — was her passion for ideas. I have never seen its equal. I came to her California home that evening with a few broad questions suggested to me by The Fountainhead. One pertained to the issue of the moral and the practical, attributes which I had always been told were opposites. The character of Howard Roark, therefore, puzzled me, because he seemed to be both at once. So I asked Ayn Rand to tell me which one she intended him to represent. This was the sort of issue — relating to the nature of ideals and their role in life — which I had tried now and then, without much success, to discuss with family or teachers. Such issues were usually dismissed by the people I knew with a bromide or a shrug, amounting to the declaration: “Who knows and who cares?” Ayn Rand knew and Ayn Rand cared.

From the moment we started talking, she was vibrant, alert, alive. She listened intently to my words, she extracted every drop of meaning and of confusion, and then she answered. She spoke at length, first considering the question as I phrased it, then the deeper implications she saw in it. At each step, she explained what were the facts supporting her viewpoint, what kinds of objections might occur to me later if I pursued the topic, and what was the logical reply to them. She never suggested that I accept what she said on her say-so; on the contrary, she was working diligently to get me to see the truth with my own eyes and mind. The result was a brilliant extemporaneous dissertation on man’s need of morality and therefore on the unity of the moral and the practical — in Roark and in any rational person — along with an eloquent demonstration of the disasters caused by the conventional viewpoint.

I was astonished not only by the originality of her ideas, but even more by her manner. She spoke as though it were urgent that I understand the issue and that she forestall every possible misinterpretation on my part. She was wringing out of herself every ounce of clarity she had. I have seen men lecturing to solemn halls of graduate students, and men running for national office, dealing in the most literal sense with issues of life and death; but I have never seen anyone work as hard as she did to be fully understood, down to the root. Yet she was doing it in a drawing room, in answer to a question from a boy she had just met. Clearly, it was not the boy who primarily inspired her; it was the subject (though she would not have answered as she did if she had doubted my sincerity).

Ayn Rand’s performance that evening opened up the world to me. She made me think for the first time that thinking is important. I said to myself after I left her home: “All of life will be different now. If she exists, everything is possible.”

As long as I knew Ayn Rand, her passion for ideas never abated. As a rule, she wrote in her office daily from noon until 6:30, and she often came out looking exhilarated but utterly spent. But then if I or someone else would drop over and make an intellectual observation or ask a question, she was suddenly, dramatically invigorated, and it might very well be midnight before she realized that she hadn’t yet eaten dinner. A day or even an hour spent on legal contracts, or on business phone calls, or on shopping, or on having her hair done, tired her out thoroughly. But philosophy — ideas — was the stimulant that always brought her back.

She had such a passion for ideas because she thought that ideas are practical — that they are the most practical things in the world. In this regard, her approach was the opposite of that which philosophers call “rationalism.” “Rationalism” amounts to the viewpoint that ideas are detached from reality, unrelated to daily events, and without significance for man’s actual life — that they are nothing but floating abstractions to be manipulated by ivory-tower intellectuals for their own amusement, just as other men manipulate chess pieces. This viewpoint dominates twentieth-century thinkers. When I went to college, I routinely heard philosophical theories being discussed or debated by my professors as a purely academic matter. One professor was a follower of Immanuel Kant, say, another was an opponent of Kant, but they spoke and acted as though nothing separated them but dry, technical differences. After the debate, the two would go off arm in arm, buddies in spirit who had just finished a game or a show and were now returning to the real world. It reminds me of the logical positivist I heard about years ago who gave a lecture on why the word “God” is meaningless, then asked for directions to the nearest synagogue so he could say his prayers. The man was surprised that anyone was surprised by his request. “What has philosophy got to do with living?” he asked indignantly.

After a few weeks of classes with such professors, I would come running to Ayn Rand, chock-full of sophistry and fallacies, and she would spend twelve or even fifteen unbroken hours struggling to straighten out my thinking again. Why did it matter so much to her? Because her own mental practice was the antithesis of rationalism. To continue the same example, I remember asking her once long ago why she was so vehement in denouncing Kant’s theories, particularly the abstract ideas at the base of his system, such as his view that the world we perceive by our senses and mind is not real, but is only a creation of man’s subjective forms of awareness. I knew that Kant was wrong, but I did not understand at the age of twenty why the issue evoked in her so strong an emotion.

She replied, in essence: “When someone says that reality is unreal or that reason is subjective, he is, admittedly or not, attacking every conviction and every value I hold. Everything I love in life — my work, my husband, my kind of music, my freedom, the creativity of man’s mind — all of it rests on my perception of reality; all of it becomes a delusion and an impossibility if reason is impotent. Once you concede Kant’s kind of approach, you unleash the destroyers among men, the creatures who, freed of the need to be rational, will proceed — as in fact they have done since Kant — to expropriate the producers, sacrifice all values, and throw the rest of us into a fascist or communist dictatorship.”

If you went up to an ordinary individual, itemized every object and person he cared for, then said to him seriously: “I intend to smash them all and leave you groveling in the muck,” he would become indignant, even outraged. What set Ayn Rand apart from mankind is the fact that she heard the whole itemization and the intention to smash everything in the simple statement that “reality is unreal.” Most people in our age of pragmatism and skepticism shrug off broad generalizations about reality as mere talk — i.e., as floating abstractions — and react only to relatively narrow utterances. Ayn Rand was the reverse. She reacted much more intensely to philosophical ideas than to narrow concretes. The more abstract an evil formulation, the more territory it covered, and the greater, therefore, the destructive potential she saw in it.

By the same token, if Ayn Rand heard a basic idea that she regarded as true — an idea upholding reality and reason, like many of the principles of Aristotle — she responded with profound respect, admiration, even gratitude. Ideas to her were not a parlor game. They were man’s form of grasping the world, and they were thus an essential of human action and survival. So true ideas were an invaluable asset, and false ones a potential disaster.

Just as Ayn Rand did not detach abstractions from concretes, so she did not allow concretes to remain detached from abstractions. That is, she rejected today’s widespread policy of staring at daily events in a vacuum, then wailing that life is unintelligible. What a man does, she held, is a product of what he thinks. To be understood, therefore, a man’s actions have to be seen in relation to his ideas. Whether she encountered an inspiring novel by Victor Hugo, accordingly, or some horror spawned by Progressive education, or America’s thrilling venture into space, or the latest catastrophe out of Washington, or the seemingly incomprehensible behavior of a friend she had trusted — whatever it was, she was always intent on explaining it by identifying the ideas at its root. Since abstractions, in her philosophy, are man’s means of grasping and dealing with concretes, she actually used them for that purpose. She would not rest content either with floating theories or with unintelligible news items. She always required a crucial unity: theory and reality, or ideas and facts, or concepts and percepts.

Now I think you can see how Ayn Rand arrived at the most revolutionary element in Objectivism, her theory of concepts. I asked her about this once. She told me that she was talking one day to a Thomist and disagreed with the theory of concepts the man was advancing. “Well, then,” Ayn Rand was asked, “where do you think concepts come from?” “Let me introspect a moment and see what my mind does in forming a concept,” she replied, “because I haven’t yet considered this question.” Whereupon, after a few minutes of silence, she came up with her idea of measurement-omission as the essence of abstraction. I was always astounded by this feat of philosophic creativity; it seemed as though she had solved the problem of the ages by a casual glance inward. But now I think I understand it. What I see is that Ayn Rand’s theory of concepts was implicit from the time of her adolescence in her basic mental approach — in her recognition of the fact that concepts are not supernatural or arbitrary, but rather are instruments enabling men to integrate perceptual data. The rest of her theory of concepts is really an elaboration of this fundamental, although of course it takes a genius to discover such an elaboration.

Ayn Rand regarded ideas as important to human life — as the shapers of man’s character, his culture, his history, his future — because she knew what an idea is. She knew that an idea is not a social ritual, but a means of cognition.

If ideas are as crucial as this, then they must be dealt with properly — which brings me to the center of the present discussion: the specific steps of Ayn Rand’s intellectual method. In her own thinking, she always distinguished the “what,” as she called it, from the “how”: what she knew, and how (by what means) she knew it. If you disagreed with her about a particular conclusion, you did not argue the point for long, because the discussion soon changed to method. To her, the “how” was the burning issue in life; it was the thing that gave rise to the “what.” So let us look at some of the distinctive steps of Ayn Rand’s method. The best way to approach this subject briefly is through the issue of principles.

Ayn Rand thought in terms of principles. In the sense I mean it, this is a rare phenomenon. I personally had never encountered or even imagined it before I met her, and most people have no idea of it at all. Let me start here by giving you an example; it is the one on which I first discovered the issue, about a year after I met Ayn Rand.

I had been taking an ethics course in college and was thoroughly confused about the virtue of honesty. I was not tempted to be dishonest myself, but I did not see how to prove the evil of lying. (I speak throughout of lying in order to gain some value from others, as against lying to defend oneself from criminals, which is perfectly moral.) On my own, I rejected the two dominant schools in regard to honesty: the religious school, which holds that lying is absolutely wrong because God forbids it; and the Utilitarian school, which holds that there are no absolutes and that one has to judge each case “on its own merits,” according to the probable consequences of any given lie. I rejected the first of these as mystical, the second as brute expediency. But what could constitute a third interpretation? I had no idea, so I went to Ayn Rand.

She started her answer by asking me to invent the most plausible lie I could think of. I don’t remember the details any longer, but I know that I did proceed to concoct a pretty good con-man scheme for bilking investors out of large sums of money. Ayn Rand then analyzed the example patiently, for thirty or forty minutes, showing me on my own material how one lie would lead necessarily to another, how I would be forced into contradictory lies, how I would gradually become trapped in my own escalating deceptions, and why, therefore, sooner or later, in one form or another, my con-man scheme would have to backfire and lead to the loss of the very things I was seeking to gain by it. If you are interested in the content of her analysis, I have re-created the substance of this lengthy discussion in my next book, Objectivism: The Philosophy of Ayn Rand.

The point now, however, lies in what happened next. My immediate reaction to her reply was to amend my initial scheme in order to remove the particular weaknesses she had found in it. So I made up a second con-man scheme, and again she analyzed it patiently, showing that it would lead to the same disastrous results even though most of the details were now different. Whereupon, in all innocence, I started to invent a third scheme (I was only eighteen). But Ayn Rand by this time was fed up. “Can’t you think in principle?” she asked me.

Let me condense into a few paragraphs what she then explained to me at length. “The essence of a con-man’s lie,” she began, “of any such lie, no matter what the details, is the attempt to gain a value by faking certain facts of reality.”

She went on: “Now can’t you grasp the logical consequences of that kind of policy? Since all facts of reality are interrelated, faking one of them leads the person to fake others; ultimately, he is committed to an all-out war against reality as such. But this is the kind of war no one can win. If life in reality is a man’s purpose, how can he expect to achieve it while struggling at the same time to escape and defeat reality?”

And she concluded: “The con-man’s lies are wrong on principle. To state the principle positively: honesty is a long-range requirement of human self-preservation and is, therefore, a moral obligation.”

This was not merely a new ethical argument to me. It was a whole new form of thought. She was saying, in effect: you do not have to consult some supernatural authority for intellectual guidance, nor try to judge particular cases in a vacuum and on to infinity. Rather, you first abstract the essence of a series of concretes. Then you identify, by an appropriate use of logic, the necessary implications or results of this essence. You thereby reach a fundamental generalization, a principle, which subsumes and enables you to deal with an unlimited number of instances — past, present, and future. The consequence, in this example, is an absolute prohibition against the con-man mentality — a prohibition based not on God, but on perception and thought.

Ayn Rand applied this method not only to lying or to moral issues, but to every fact and question she studied. She applied it in every branch of philosophy, from metaphysics to esthetics. If she saw that the sun rises every day, she did not, like David Hume, consider it a puzzling coincidence. She identified the essence of the event: an entity acting in accordance with its nature; and thereby was able to reach and validate the principle of causality. Or, if she admired the novels of Hugo and the plays of Friedrich Schiller, she did not say merely: “I like their grand-scale protagonists.” She identified the essence of such art: the depiction of man as a being with volition; and thereby was able to reach and validate the principle of Romanticism in art. This kind of method is the root of a whole new approach to thought. It led her a step at a time to a philosophy that is neither mystical nor skeptical, but objective; one that neither bases knowledge on revelation nor succumbs to relativism, but that teaches men to conceptualize logically the data of observation. Such a philosophy enables us to discover absolutes which are not supernatural, but rational and this-worldly.

Ayn Rand started thinking in terms of principles, she told me once, at the age of twelve. To her, it was a normal part of the process of growing up, and she never dropped the method thereafter. Nor, I believe, did she ever entirely comprehend the fact that the approach which was second nature to her was not practiced by other people. Much of the time, she was baffled by or indignant at the people she was doomed to talk to, people like the man we heard about in the early 1950s, who was calling for the nationalization of the steel industry. The man was told by an Objectivist why government seizure of the steel industry was immoral and impractical, and he was impressed by the argument. His comeback was: “Okay, I see that. But what about the coal industry?”

The method of thinking in principle involves many complexities, about which I intend, someday, to write an entire volume. But let me mention here a few further aspects, to give you a fuller picture of Ayn Rand’s approach. You recall that, to reach the principle that honesty is a virtue, we had first to grasp the essence of lying. Let us focus now on this issue, i.e., thinking in essentials, which was a central part of Ayn Rand’s method.

The concept of “essential” was originated by Aristotle in connection with his theory of definition. He used the term to name the quality that makes an entity the distinctive kind of thing it is, as against what he called the “accidental” qualities. For example, having a rational faculty is essential to being a man. But having blue eyes rather than green is not; it is a mere detail or accident of a particular case. Ayn Rand’s commitment to essentials grew out of this Aristotelian theory, although she modified the concept significantly and expanded its role in human thought.

For Ayn Rand, thinking in essentials was not restricted to the issue of definitions. It was a method of understanding any complex situation by deliberately setting aside irrelevancies — such as insignificant details, superficial similarities, unimportant differences — and going instead to the heart of the matter, to the aspects which, as we may say, constitute the distinctive core or being of the situation. This is something Ayn Rand herself did brilliantly. I always thought of her, metaphorically, as possessing a special power of vision, which could penetrate beneath the surface data that most people see, just as an X-ray machine penetrates beneath the flesh that meets our eyes to reveal the crucial underlying structures.

This kind of penetration is precisely what was lacking in the man I just mentioned, who could see no connection between the steel and the coal industries. Ayn Rand, by contrast, knew at once that steel in this context is a mere detail. She went to the essence of nationalization: government force unleashed against the minds of productive, thinking men — a practice common to countless cases beyond steel, and one that will have a certain kind of effect no matter where it occurs. This is the kind of mental process that is required if one is to reach a generalization uniting many cases. It is the process that is required if one is to champion capitalism as a matter of principle, rather than, like today’s conservatives, clamoring merely for the removal of some random controls.

In the deepest epistemological sense, Ayn Rand was the opposite of an egalitarian. She did not regard every aspect of a whole as equal in importance to every other. Some aspects, she held, are crucial to a proper understanding; others merely clutter up the cognitive landscape and distract lesser minds from the truth. So the task of the thinker is to distinguish the two, i.e., to analyze and process the data confronting him, not to amass mounds of information without any attempt at mental digestion. She herself always functioned like an intellectual detective, a philosophical Hercule Poirot, reading, watching, listening for the fact, the statement, the perspective that would illuminate a whole, tortuous complexity — the one that would reveal the essence and thereby suddenly make that complexity simple and intelligible. The result was often dramatic. When you were with her, you always felt poised on the brink of some startling new cognitive adventure and discovery.

Here is an example of what I mean. In the 1970s, Ayn Rand and I were watching the Academy Awards on television; it was the evening when a streaker flashed by during the ceremonies. Most people probably dismissed the incident with some remark like: “He’s just a kid” or “It’s a high-spirited prank” or “He wants to get on TV.” But not Ayn Rand. Why, her mind wanted to know, does this “kid” act in this particular fashion? What is the difference between his “prank” and that of college students on a lark who swallow goldfish or stuff themselves into telephone booths? How does his desire to appear on TV differ from that of a typical game-show contestant? In other words, Ayn Rand swept aside from the outset the superficial aspects of the incident and the standard irrelevant comments in order to reach the essence, which had to pertain to this specific action in this distinctive setting.

“Here,” she said to me in effect, “is a nationally acclaimed occasion replete with celebrities, jeweled ball gowns, coveted prizes, and breathless cameras, an occasion offered to the country as the height of excitement, elegance, glamor — and what this creature wants to do is drop his pants in the middle of it all and thrust his bare buttocks into everybody’s face. What then is his motive? Not high spirits or TV coverage, but destruction — the satisfaction of sneering at and undercutting that which the rest of the country looks up to and admires.” In essence, she concluded, the incident was an example of nihilism, which is the desire not to have or enjoy values, but to nullify and eradicate them.

Nor did she stop there. The purpose of using concepts — and the precondition of reaching principles — is the integration of observed facts; in other words, the bringing together in one’s mind of many different examples or fields, such as the steel and the coal industries. Ayn Rand was expert at this process. For her, grasping the essence of an event was merely the beginning of processing it cognitively. The next step was to identify that essence in other, seemingly very different areas, and thereby discover a common denominator uniting them all.

Having grasped the streaker’s nihilism, therefore, she was eager to point out some different examples of the same attitude. Modern literature, she observed, is distinguished by its creators’ passion not to offer something new and positive, but to wipe out: to eliminate plots, heroes, motivation, even grammar and syntax; this represents the brazen desire to destroy an entire art form along with the great writers of the past by stripping away from literature every one of its cardinal attributes. Just as Progressive education is the desire for education stripped of lessons, reading, facts, teaching, and learning. Just as avant-garde physics is the gleeful cry that there is no order in nature, no law, no predictability, no causality. That streaker, in short, was the very opposite of an isolated phenomenon. He was a microcosm of the principle ruling modern culture, a fleeting representative of that corrupt motivation which Ayn Rand has described so eloquently as “hatred of the good for being the good.” And what accounts for such widespread hatred? she asked at the end. Her answer brings us back to the philosophy we referred to earlier, the one that attacks reason and reality wholesale and thus makes all values impossible: the philosophy of Immanuel Kant.

Listening to Ayn Rand that evening, I felt that I was beginning to understand what it means really to understand an event. I went home and proceeded to write the chapter in The Ominous Parallels dealing with Weimar culture, which develops at length Ayn Rand’s analysis of the modern intellectual trend. The point here, however, is not her analysis, but the method that underlies it: observation of facts; the identification of the essential; the integration of data from many disparate fields; then the culminating overview, the grasp of principle.

I use the term “overview” deliberately, because I always felt as though everyone else had his face pressed up close to an event and was staring at it myopically, while she was standing on a mountaintop, sweeping the world with a single glance, and thus was able to identify the most startling connections, not only between streaking and literature, but also between sex and economics, art and business, William F. Buckley and Edward Kennedy. She was able to unite the kinds of things that other people automatically pigeonhole into separate compartments. Her universe, as a result, was a single whole, with all its parts interrelated and intelligible; it was not the scattered fragments and fiefdoms that are all most people know. To change the image: she was like a ballet dancer of the intellect, leaping from fact to fact and field to field, not by the strength of her legs, but by the power of logic, a power that most men do not seem fully to have discovered yet.

The unity of Ayn Rand’s universe rested on more than I can indicate here. But I want to mention a last aspect of her method, one which is crucial in this regard: thinking in terms of fundamentals.

By “fundamental” I mean that on which everything else in a given context depends, that which is the base or groundwork on which a whole development is built. This concept is necessary because human knowledge, like a skyscraper, has a structure: certain ideas are the ground floor or foundation of cognition, while other ideas, like the upper stories of a building, are dependents, no better or stronger than the foundation on which they rely. Thinking in terms of fundamentals means never accepting a conclusion while ignoring its base; it means knowing and validating the deepest ideas on which one’s conclusion rests.

For instance, in our discussion of honesty, we said that lying is wrong because it is incompatible with the requirements of self-preservation. What base were we counting on? Clearly, a certain ethical theory, the one that upholds self-preservation as man’s proper goal — in contrast to the ethics that advocates self-sacrifice for the sake of others. If you accept this latter theory, our whole argument against lying collapses. Why should a man who is committed to selfless service necessarily tell the truth? What if, as often happens, others want him to lie and claim that it is essential to their happiness?

But this is just the beginning of our quest for fundamentals, because the field of ethics itself rests on the basic branches of philosophy, as you can see in this same example. How did we prove that lying is self-destructive? We said that a policy of lying leads to a war against reality, which no one can win. Well, why can’t anyone? What ideas are we counting on here? Clearly, that there is a reality; that it is what it is independent of our desires; and that our minds are able to know these facts, i.e., to know reality. The issue of lying, in sum, whatever view of it one takes, is merely a consequence. It is a derivative, which rests on a complex philosophic foundation.

Thinking in terms of fundamentals is not an independent aspect of Ayn Rand’s method; it is an inherent part of thinking in principle. If one ignored the issue of fundamentals, his so-called principles would be merely a heap of disconnected, random claims — like a catalog of divine commandments — and they would be of no help in understanding the world or guiding one’s action. One would not be able to prove or even retain the items in such a heap; they would be nothing but floating abstractions. Only ideas organized into a logical structure can be tied to reality, and only such ideas, therefore, can be of use or value to man; and that means principles based on antecedent principles, going back ultimately to the fundamentals of philosophy.

Ayn Rand’s real intellectual interest was emphatically not politics. Of course, she was a champion of capitalism and freedom. But unlike today’s libertarians and conservatives, she was a thinker; she was not content to preach liberty or private property as though they were self-evident axioms. She wanted to know what they depend on and how they can be proved, all the way back to metaphysics and epistemology. This is why she admired Aristotle and Thomas Aquinas even more than she did Thomas Jefferson, and why, to the amazement of today’s businessmen, she hated Kant and Hegel much more than income taxes. It is also why, starting with an interest in political questions, she was led eventually to formulate an overall system of thought, expressing a complete philosophy of life.

Ayn Rand’s mind had an exalted quality, one shared by only a handful of kindred spirits across the ages. Hers was a mind with the profundity of a true philosopher; a mind that greeted the deepest issues of man’s life with solemn reverence and ruthless logic; a mind that derived its greatest joy and its personal fulfillment from the rational study of fundamentals. In our age of mediocrity and anti-philosophy, this fact doomed her to a certain loneliness. It made her a unique personality, unable to find her equal, just as her product, the philosophy of reason that she called Objectivism, is unique and unequaled.

If you want to know what Ayn Rand was like as a person, I can now answer simply: you already know it, because she was just what she had to be given the nature of her intellectual processes. Ayn Rand the person was an expression and corollary of Ayn Rand the mind.

Ayn Rand herself repudiated any dichotomy between mind and person. Her mind, she held, was the essence of her person: it was her highest value, the source of her other values, and the root of her character traits. Thinking, to her, was not merely an interest or even a passion; it was a lifestyle. When she greeted you, for instance, she often asked not “How are you?” but “How’s your universe?” Her meaning was: “How’s your view of the universe? Have the problems of daily life swamped your philosophical knowledge? Or are you still holding on to the fact that reality is intelligible and that values are possible?” Similarly, when you left, she would say not “Goodbye,” but “Good premises.” In other words: “Don’t count on luck or God for success, but on your own thinking.” If self-esteem means confidence in the power of one’s mind, then the explanation of Ayn Rand’s profound self-esteem is obvious: she earned it — both in virtue of the value she ascribed to the mind, and of the meticulous method by which she used her own.

Another result of this method was that attribute men call “strength of character.” Ayn Rand was immutable. I never saw her adapting her personality to please another individual. She was always the same and always herself, whether she was talking with me alone, or attending a cocktail party of celebrities, or being cheered or booed by a hall full of college students, or being interviewed on national television. She took on the whole world — liberals, conservatives, communists, religionists, Babbitts, and avant-garde alike — but opposition had no power to sway her. She knew too clearly how she had reached her ideas, why they were true, and what their opposites were doing to mankind. Nor, like Howard Roark, could she ever be tempted to betray her convictions. Since she had integrated her principles into a consistent system, she knew that to violate a single one would be to discard the totality. A Texas oil man once offered her up to a million dollars to use in spreading her philosophy, if she would only add a religious element to it to make it more popular. She threw his proposal into the wastebasket. “What would I do with his money,” she asked me indignantly, “if I have to give up my mind in order to get it?”

Dedication to thought and thus to her work was the root of Ayn Rand’s person; it was not, however, her only passion. As a result of this root, she held intense values in every department of life. She loved her husband of fifty years, Frank O’Connor, a sensitive, intense man, not nearly as intellectual as she but just as independent and deep in his own quiet way. He is the exception to my statement that she never found an equal. Frank did not have her mind; but his dedication to his work as a painter, his extravagant Romanticism, his innocent, sunlit sense of life, and, I may add, the visible joy he took in her work and in her person — all this made it plain that he did share her soul.

As to Ayn Rand’s other values, I have hardly room here even to mention a sample. Some of them are obvious from her writings, such as America, skyscrapers, modern technology, man the hero, the great romantic artists of the nineteenth century, the silent German movies from her childhood that she always tried to find again, Agatha Christie, TV’s Perry Mason — and there were so many more, from her cats to her lion pictures to her Adrian clothes to her vivid, outsize jewelry to her stamp collecting to her favorite candy (Godiva chocolates) and even her favorite color (blue-green). In every aspect of life, she once told me, a man should have favorites; he should define what he likes most and why, and then proceed to get it. She always did just that — from fleeing the Soviet dictatorship for America, to tripping her future husband on a movie set to get him to notice her, to ransacking ancient record shops to unearth some lost treasure, to decorating her apartment with an abundance of blue-green pillows, ashtrays, and even walls.

Ayn Rand was a woman dominated by values, values that were consistent expressions of a single view of life — which is what you might expect of a great thinker who was at once a moralist and an artist. The corollary is that she had strong dislikes in every department, too. You cannot love something without rejecting just as passionately that which you see as the antithesis of your love. Most people do not know their values clearly or hold them consistently; their desires are correspondingly vague, ambivalent, contradictory. To many such people, Ayn Rand’s violent aliveness and assertiveness were shocking, even intimidating. To me, however, they were a tonic. I felt as though other people were drawn in wishy-washy shades of gray, whereas her soul was made of brilliant color.

Unfortunately — and here I turn for a moment to a somber topic — the wishy-washy people often wanted something from Ayn Rand and were drawn to her circle. A few of them wanted simply to advance their careers by cashing in on her fame and following. Others craved the security they found in her approval. Still others had an element of sincerity during their youth, but turned anti-intellectual as they grew older. These people did what they had to do in order to get from Ayn Rand what they wanted.

What they did usually was to give her the appearance of being the philosophical intelligence she desperately wanted to meet. They were glib, articulate, sometimes even brilliant people. They absorbed the surface features of Ayn Rand’s intellectual style and viewpoint as though by osmosis and then mimicked them. Often, because she was so open, they knew what she wanted them to say, and they said it convincingly. Though uninterested in philosophy and even contemptuous of fundamentals, they could put on an expert act to the contrary, most often an act for themselves first of all. Ayn Rand was not the only person to be taken in by it. I knew most of these people well and, to be fair here, I must admit that I was even more deluded about them than she was.

All of these types ended up resenting Ayn Rand, and even hating her. They felt increasingly bored by the realm of ideas, and chafed under the necessity of suppressing their real self in order to keep up the pretense of intellectual passion. Above all, they found Ayn Rand’s commitment to morality intolerable. In her mind, moral principles were requirements of man’s survival proved by reference to the deepest premises of philosophy; they were thus the opposite of a luxury or a social convention; they were life-or-death absolutes. When she saw a moral breach, therefore — such as dishonesty or moral compromise or power lust or selling one’s soul to the Establishment like Peter Keating — she knew what it meant and where it would lead, and she condemned the individual roundly.

To the types of people we are talking about, this was an unbearable reproach. They could accept Objectivism as pure theory for a while, but only as theory. When they were tested by life, they gave in guiltily, one at a time, to the sundry pressures they encountered, and they shrank thereafter from facing her. Usually they ended up artfully concealing their resentment, saying that they still admired, even adored, Ayn Rand and her philosophy, but not, as they put it, her “moralizing” or her “anger.” Her “moralizing” means the fact that she pronounced moral judgments, i.e., applied her philosophy to real life. Her “anger” in this context means that she took her judgments seriously.

Several of these individuals are now publishing their memoirs in the hopes of getting even with Ayn Rand at last — and also of cashing in on her corpse. At this latter goal, regrettably, some of them seem to be succeeding.

Ayn Rand refused to make collective judgments. Each time she unmasked one of these individuals, she struggled to learn from her mistake. But then she would be deceived again by some new variant.

Her basic error was that she took herself as the human standard or norm (as in a sense we all must do, since we have no direct contact with any human consciousness but our own). So if she saw all the outward signs of philosophical enthusiasm and activity, she took it to mean that the individual was, in effect, an intellectual equal of hers, who regarded ideas in the same way she did. After a long while, I came to understand this error. I realized how extraordinary her mind really was, and I tried to explain to her her many disappointments with people.

“You are suffering the fate of a genius trapped in a rotten culture,” I would begin. “My distinctive attribute,” she would retort, “is not genius, but intellectual honesty.” “That is part of it,” I would concede, “but after all I am intellectually honest, too, and it doesn’t make me the kind of epochal mind who can write Atlas Shrugged or discover Objectivism.” “One can’t look at oneself that way,” she would answer me. “No one can say: ‘Ah me! the genius of the ages.’ My perspective as a creator has to be not ‘How great I am’ but ‘How true this idea is and how clear, if only men were honest enough to face the truth.’” So, for understandable reasons, we reached an impasse. She kept hoping to meet an equal; I knew that she never would. For once, I felt, I had the broad historical perspective, the perspective on her, that in the nature of the case she could not have.

In order to be fully clear at this point, I want to make one more comment about Ayn Rand’s anger. Many times, as I have explained, it was thoroughly justified. But sometimes it was not justified. For instance, Ayn Rand not infrequently became angry at me over some philosophical statement I made that seemed for the moment to ally me with one of the intellectual movements she was fighting. On many such occasions, of course, she remained calm because she understood the cause of my statement: that I still had a great deal to learn. But other times she did not; she did not grasp fully the gulf that separates the historic master, to whom the truth is obvious, from the merely intelligent student. Since her mind immediately integrated a remark to the fundamentals it presupposes, she would project at once, almost automatically, the full, horrendous meaning of what I had uttered, and then she would be shocked at me. Once I explained that I had not understood the issue at all, her anger melted and she became intent on clarifying the truth for me. The anger she felt on such occasions was mistaken, but it was not irrational. Its root was her failure to appreciate her own intellectual uniqueness.

I should add here that I never saw her hold an unadmitted grudge. Her anger never festered unexpressed or turned into devious, brooding hatred. It was an immediate, open storm of indignant protest — then it was over. In this respect, she was the easiest person in the world to know and to deal with.

Did I ever get angry at Ayn Rand’s anger at me? Certainly I did. But my anger did not matter to me and did not last. To me, her temper was an infinitesimal price to pay for the values I was gaining from her. The world, I knew, is full of kindly souls who specialize in loving everybody and forgiving everything; but these souls bored me. I wanted out of life that which Ayn Rand alone, in all her fiery genius, had to offer.

This brings me to my final topic. Whatever Ayn Rand’s anger, her disappointments, her pain, they went down, as she said about Roark, only to a certain point. Beneath it were her self-esteem, her values, and her conviction that happiness, not pain, is what matters. People sometimes ask: “But did she achieve happiness in her own life?” My answer would consist of three images.

One is the memory of a spring day in 1957; we were walking up Madison Avenue toward the office of Random House, which was in the process of bringing out Atlas Shrugged. She was looking at the city she had always loved most, and now, after decades of rejection and bitter poverty, she had seen the top publishers in that city competing for what she knew, triumphantly, was her masterpiece. She turned to me suddenly and said: “Don’t ever give up what you want in life. The struggle is worth it.” I never forgot that. I can still see the look of quiet radiance on her face.

Then I see the image of her one night at a party, perhaps twenty years ago now; she was sitting on a couch with some other guests, looking shy, bored, and miserable. Then her husband, who had been working late, arrived, and she called out “Cubbyhole” (her pet name for him), insisting, as she always did, that he squeeze onto the couch beside her so that they could hold hands. And they smiled at each other, and she relaxed visibly, and he patted her hand and called her “Fluff” (his name for her).

Then I see her as she was turning seventy, on the morning when she, Frank, and I came home from the hospital after her lung surgery. It was still difficult for her to walk, but she wanted to play her “tiddlywink” music, as she always called it — gay, lighthearted, utterly cheerful popular tunes from the turn of the century, which have no counterpart today. And she got up and began to march around the living room to the music, tossing her head, grinning at us, marking the beat by waving her little baton, Frank all the while beaming at her from his easy chair. If ever I want to think of a non-tragic spectacle, I remember that.

Ayn Rand did experience unhappiness in her life. But if you ask me: was she a happy person? I have only one answer to give you. She was.

Ladies and gentlemen: in my judgment, Ayn Rand did live by her philosophy. Whatever her errors, she practiced what she preached, both epistemologically and morally. As a result, she did achieve in her life that which she set out to achieve; she achieved it intellectually, artistically, emotionally. But for you to judge these matters yourself and reach an objective view of Ayn Rand, you must be an unusually philosophical kind of person, because you are living in a Kantian, anti-value culture, and you are going to be offered some very opposite accounts of the facts of her life. So you have to know: what is objectivity? What sort of testimony qualifies as evidence in this context? What do you believe is possible to a man — or a woman? What kind of soul do you think it takes to write Atlas Shrugged? And what do you want to see in a historic figure?

I am not a Kantian. I do not believe that we can know Ayn Rand only as she appeared to somebody or other. But if I were to grant that premise for a split second, if I were to agree that we all construe reality according to our own personal preferences, then I would still draw a fundamental moral distinction between two kinds of preferences: between those of the muckrakers and those of the hero-worshipers. It is the distinction between the people who, confronted by a genius, are seized with a passion to ferret out flaws, real or imaginary, i.e., to find feet of clay so as to justify their own blighted lives — as against the people who, desperate to feel admiration, want to dismiss any flaw as trivial because nothing matters to them in such a context but the sight of the human greatness that inspires and awes them. In this kind of clash, I am sure, you recognize where I stand.

I knew Ayn Rand longer than anyone now alive. I do not believe that my view of her is subjective. But if I am to go down in history as her apologist or glamorizer, then so be it. I am proud to be cursed as a “cultist,” if the “cult” is unbreached dedication to the mind and to its most illustrious exponents.

According to the Objectivist esthetics, a crucial purpose of art is to depict man as he might be and ought to be, and thereby provide the reader or viewer with the pleasure of contemplating, in concrete, embodied form, his abstract moral ideal. Howard Roark and John Galt provide this kind of inspiration to me, and to many other people I know. What I want to add in closing is that Ayn Rand in person provided it, too. Because of the power of her mind and the purity of her soul, she gave me, when I was with her, what her novels give me: a sense of life as exaltation, the sense of living in a clean, uplifted, benevolent world, in which the good has every chance of winning, and the evil does not have to be taken seriously. I often felt, greeting her, as though I were entering the Atlantis of Atlas Shrugged, where the human ideal is not merely an elusive projection to be reached somehow, but is real, alive, here — seated across the room on the blue-green pillows, smiling delightedly, eager to talk philosophy with me, eyes huge, brilliant, penetrating.

That is the Ayn Rand I knew. And that is why I loved her.

Medicine: The Death of a Profession

This lecture was delivered at Boston’s Ford Hall Forum in April 1985, published in the April–June 1985 issues of The Objectivist Forum and anthologized in The Voice of Reason: Essays in Objectivist Thought in 1989.

 

One day, when you are out of town on a business trip, you wake up with a cough, muscle aches, chills, and a high fever. You do not know what it is, you start to panic, but you do know one action to take: you call a doctor. He conducts a physical exam, takes a history, administers lab tests, narrows down the possibilities; within hours, he reaches a diagnosis of pneumonia and prescribes a course of treatment, including antibiotics. Soon you begin to respond, you relax, the crisis is over. Or: you are getting out of your car, you fall and break your leg. It is a disaster, but you remain calm, because you can utter one sentence to your wife: “Call the doctor.” He proceeds to examine your leg for nerve and blood-vessel injury, he takes X-rays, reduces the fracture, puts on a cast; the disaster has faded into a mere inconvenience, and you resume your normal life. Or: your child comes home from school with a stabbing pain in the abdomen. There is only one hope: you call the doctor. He performs an appendectomy — the child recovers.

We take all this completely for granted, as though modern drugs, modern hospitals, and modern doctors were facts of nature, which always had been there and which always will be there. Many people today take for granted not only the simpler kinds of medical intervention, but even the wonder cures and wonder treatments that the medical profession has painstakingly devised — like the latest radiation therapy for breast cancer, or the intricate delicacy of modern brain surgery, or such a breathtaking achievement as the artificial-heart implants performed by Dr. William C. DeVries. Most of us expect that the doctors will go on accomplishing such feats routinely, steadily removing pain and thus enhancing the quality of our life, while adding ever more years to its quantity.

America’s medical system is the envy of the globe. The rich from every other country, when they get sick, do not head for Moscow or Stockholm or even London anymore; they come here. And in some way, despite the many public complaints against the medical profession, we all know this fact; we know how good our doctors are, and how much we depend on their knowledge, skill, and dedication. Suppose you had to go on a six-month ocean voyage with no stops in port, with ample provisions and sailors, but with only one additional profession represented on board, and you could decide which it would be. Would you ask for your lawyer to come along? your accountant? your congressman? Would you dare even to ask for your favorite movie star? Or would you say: “Bring a doctor. What if something happens?” The terror of having no answer to this question is precisely what the medical profession saves us from.

I am not saying that all doctors are perfect — they are not; or that they all have a good bedside manner — they do not; or that the profession is free from flaws — like every other group today, the medical profession has its share of errors, deficiencies, weaknesses. But these are not my subject tonight, and they do not alter two facts: that our doctors, whatever their failings, do give us the highest caliber health care in world history — and that they live a grueling existence in order to do so.

I come from a medical family, and I can tell you what a doctor’s life is like. Most of them study nonstop for years in medical school and then work nonstop until they die. My own father, who was a surgeon, operated daily from 7 A.M. until noon and then made hospital rounds; from 2 to 6 P.M., he held office hours. When he came home for dinner, if he did, the phone never stopped ringing — it was nurses asking instructions, or doctors discussing emergency cases, or patients presenting symptoms. When he got the chance, usually late at night or on Sundays after rounds, he would read medical journals (or write for them), to keep abreast of the latest research. My father was not an exception. This is how most doctors, in any branch of medicine, live, and how they work.

The profession imposes not only killing hours, but also continuous tension: doctors deal all the time with crisis — with accidents, diseases, trauma, disaster, the imminence of death. Even when an ailment is not a mortal threat, the patient often fears that it is, and he must be reassured, nursed through the terror, even counseled psychologically by the physician. The pressure on the doctor never lets up. If he wants to escape even for the space of a single dinner on the town, chances are that he cannot: he will probably get beeped and have to rush to the emergency room just as the entree is being served.

The doctor not only has to live and work in such a pressure cooker, he has to think all the time — clearly, objectively, scientifically. Medicine is a field that requires a vast body of specialized theoretical knowledge; to apply it properly to particular cases, the doctor must regularly make delicate, excruciatingly complex decisions. Medical treatment is not usually a cut-and-dried affair, involving a simple, self-evident course of action; it requires the balancing of countless variables; it requires clinical judgment. And the doctor must not only exercise such judgment — he must do it fast; typically, he has to act now. He cannot petition the court or his client or any employer for a postponement. He faces daily, hourly, the merciless timetable of nature itself.

What I personally admire most about doctors is the fact that they live this kind of life not out of any desire for altruistic self-sacrifice, but selfishly — which is the only thing that enables them to survive it. They love the field, most of them; they find the work a fascinating challenge in applied science. They are proud men, most of them, with an earned pride in their ability to observe, evaluate, act, cure. And, to their credit, they expect to be rewarded materially for their skill; they want to make a good living, which is the least men can offer them in payment for their achievements. They make that living, as a rule, by standing on their own, not as cogs in some faceless, government-subsidized enterprise, but as entrepreneurs in private practice. The doctors are among the last of the capitalist breed left in this country. They are among the last of the individualists that once populated this great nation.

If I knew nothing about today’s world but the nature of our politicians and the philosophy represented by the medical profession, I would predict an inevitable, catastrophic clash between the two: between the government and the doctors. On purely theoretical grounds, I would predict the destruction of the doctors by the government, which in every field now protects and rewards the exact opposite of thought, effort, and achievement.

This catastrophe is actually taking place. It will affect your future as well as that of the doctors.

To understand what is happening in medicine today, we must go back to the beginning, which in this case is 1965, the year when Medicare and Medicaid were finally pushed through Congress by Lyndon Johnson. Medicare covers most of the medical expenses of those over sixty-five, whatever their income. Medicaid is a supplemental program for the poor of any age.

Those of us who opposed the Johnson plan argued at the time that government intervention in medicine is immoral in principle and would be disastrous in practice. No man, we claimed, has a right to medical care; if he cannot pay for what he needs, then he must depend on voluntary charity. Government financing of medical expenses, we argued, even if it is for only a fraction of the population, necessarily means eventual enslavement of the doctors and, as a result, a profound deterioration in the quality of medical care for everyone, including the aged and the poor.

The proponents of Medicare were unmoved by any arguments. Altruistic service to the needy, they said, is man’s duty. It is degrading, they said, for the elderly to be dependent on private charity; a “means test” is incompatible with human dignity. Besides, they added, the government would not dream of asking for any control over the doctors or over their methods of patient care. All we want the state to do, they said, is pay the bills.

It is now twenty years later. Let us look at what actually happened.

The first result of the new programs should have been self-evident. Suppose we apply the same principle to nutrition. Suppose President Johnson had said: “It is unfair for you to have to pay for your own food and restaurant bills. Men have a right to eat. Washington, therefore, will pick up the tab.” Can you project the results? Can you imagine the eating binges, the sudden mania for dining out, the soaring demand for baked peacock tongues and other gourmet delicacies? Do you see Lutèce and the ’21’ Club becoming nationally franchised and starting to outdraw McDonald’s? Why not? The eaters do not have to pay for it. And the food industry, including its most sincere members, is ecstatic; now that the money is pouring from Washington into the grocery chains and the restaurants, they can give every customer the kind of luxury treatment once reserved for millionaires. Everybody is happy — except that expenditure on food becomes so great a percentage of our GNP, and the drain on the federal treasury becomes so ominous, that every other industry starts to protest and soon even the bureaucrats begin to panic.

This is what happened to medical spending in the United States. The patients covered by the new programs no longer had to pay much attention to cost — that was the whole purpose of the programs. And the health-care professionals at first were generally delighted. Now, many of them felt, the sky is the limit, and they proceeded to build hospitals, purchase equipment, and administer tests accordingly. Medical expenditures in the U.S. were 4.3% of GNP in 1952; today they are about 11% and still rising. Medicare expenditures doubled from 1974 to 1979, doubled again by 1984, and are expected to double again by 1991, at which time, according to current estimates, the Medicare program will be bankrupt. Something, the government recognized, has to be done; we are going broke because of the insatiable demand for medical care.

The government did not decide to cancel its programs and return to a free market in medicine — when are disastrous government programs ever canceled? Instead, it did what governments always do: it decided to keep the programs but impose rigid controls on them. The first step was a campaign to force hospitals not to spend much on Medicare patients, no matter what the effects on the health of those patients.

We will no longer, officials said, pay hospitals a fee for each service they render a Medicare patient. That method of payment, they said, simply encourages spending. Instead, we will pay according to a new principle, DRGs. DRGs represent the first major assault by the government against the doctors and their patients. It is not yet the strangulation of the medical profession. But it is the official dropping of the noose around their necks.

DRG means “diagnosis-related group.” According to this approach, the government has divided all ailments into 468 possible diagnoses, and has set in advance a fixed, arbitrary fee for each: it will pay a hospital only what it claims is the average cost of the ailment. For example, for a Medicare patient in the Western Mountain region who is admitted to a hospital with a heart attack and finally recovers enough to go home, the government now pays the hospital exactly $5,094 — no more and no less. And it pays this amount no matter what the hospital does for the patient, no matter how long his stay or how short, no matter how many services he requires or how few. If the patient costs the hospital more than the government payment, the hospital loses money on him. If he costs less, the hospital makes a profit.
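
To make the incentive arithmetic concrete, here is a minimal sketch in Python. This is my illustration, not part of the lecture: the $5,094 fee is the example quoted above, and the per-patient cost figures are hypothetical.

    # DRG incentive: the hospital collects one fixed fee per diagnosis group,
    # whatever the patient's care actually costs.
    DRG_FIXED_FEE = 5094  # example fee quoted above (heart attack, Western Mountain region)

    def hospital_net(cost_of_care):
        """Hospital's profit (positive) or loss (negative) on one patient."""
        return DRG_FIXED_FEE - cost_of_care

    # Hypothetical cases: the less done for the patient, the larger the profit.
    for label, cost in [("early discharge, minimal work-up", 2500),
                        ("standard course of treatment", 5000),
                        ("angiogram plus five days of ICU", 9800)]:
        print(f"{label}: net of {hospital_net(cost):+,} dollars")

Whatever the exact numbers, the sign of the result is the point: every additional service rendered comes straight out of the hospital’s margin.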

Here is a fictional story now in the process of becoming reality around the country. A man suffering from severe chest pains is taken by ambulance to the hospital. He receives certain standard tests, including a cardiogram, then is moved to the Intensive Care Unit, where his vital signs are continuously monitored. His doctor thinks that in this instance a further test, an angiogram, is urgently indicated; this test would outline the arteries of the heart and indicate if one is about to close off, an event that could be fatal. The hospital administrator protests: “An angiogram is expensive. It costs up to $1,000, about 20% of our total fee for this man, and who knows what else he’s still going to cost us? You can’t prove this test is necessary. Let’s wait and see.” The test is not given. Maybe the patient lives, maybe not. Several days later, the administrator comes to the doctor: “You’ve got to get this man out of the ICU. It’s costing almost $800 per day, and he’s been here now for five days. What with everything else, we’ve already spent almost the whole payment we get for him.” The doctor thinks that the patient still desperately needs the specialized nursing available only in the ICU. The administrator overrules him. “There’s an area of judgment here,” he says. “We’ll just have to take a bit of a chance on this case.”

Or: the doctor decides that the patient is an excellent candidate for remedial heart surgery. A bypass operation, he thinks, would probably prolong the man’s life considerably while relieving him of pain. But the man, after all, is elderly and the operation would involve a lengthy hospital stay. “Let’s try a more conservative treatment first,” the administrator says, “let’s give him some medication and wait and see.” Again, maybe the patient lives, maybe not.

Let us say that he lives and is moved to a bed in the regular ward. He still feels very weak, and the doctor does not think he is anywhere near ready to be discharged. But the $5,094 has long since been spent, and the administrator starts to wonder aloud: “Maybe this man could manage somehow at home. In any event, he’s eating us alive — get him out of here.” Maybe the patient will survive at home, maybe not.

Do you see the thrust of the system? If the hospital does relatively little for the patient, it makes money; if it provides an extensive range of services, it loses heavily. The best case from its viewpoint is for the patient to die right after admission: the hospital still gets the full fee. The worst case is for him to survive with complications and require a lengthy stay — which is why some hospitals are refusing to admit patients they fear will linger on too long.

I do not mean to suggest that our hospitals are now callously withholding urgently needed treatment from Medicare patients. Today’s hospitals and doctors do have integrity; most are continuing to do their best for the patient. The point is that they have to do it within the DRG constraints. The issue is not simply: treat the patient or let him die. The issue is: treat him how? At what cost? With what range of services, specialists, and equipment? With what degree of safety or of risk? This is the area where there is enormous room for alternatives in the quality of medical treatment. And this is the area that is now in the process of being slashed across the board for Medicare patients, the very people singled out by the liberals in the 1960s as needing better medical care.

To revert to our nutrition analogy: it is as though the government socialized eating out, paying restaurants only what it computed to be the average cost per meal. There would then be a powerful incentive for restaurants to cut corners in every imaginable way — to serve only the cheapest foods in the smallest amounts in the cheesiest settings. What do you think would happen to the nation’s eaters — and its chefs — under such a setup? How long could the chefs preserve their dedication to preparing haute cuisine, when the restaurant owners, in self-preservation, were forced to fight them at every step and to demand junk food instead?

There is now a new and deadly pressure on the doctors, which continuously threatens the independence and integrity of their medical judgment: the pressure to cave in to arbitrary DRG economies, while blanking out the effects on the patient. In some places, hospitals are offering special financial incentives to the physician whose expenditure per patient averages out to be relatively low. For example, the hospital might subsidize such a doctor’s office rent or purchase new equipment for him. On the other hand, a doctor who insists on quality care for his Medicare patients and thereby drives up costs is likely to incur the hospital’s displeasure. In the extreme case, the doctor risks being denied staff privileges, which means cutting off his major source of livelihood. Thanks to DRGs, a new conflict is in the offing, just starting to take shape: the patient vs. the hospital. To put it another way, the conflict is: doctors vs. hospitals — doctors fighting a rearguard action to maintain standards against hospitals that are forced by the government to become cost-cutting ogres. How would you like to practice a profession in which half your mind is devoted to healing the patient, while the other half is trying to appease a hospital administrator who himself is trying to appease some official in Washington?

Medicare patients are not a small group. Because of their age, they constitute a significant part of most doctors’ practice. Medicare patients now make up about 50% of all hospital admissions in the U.S.

The defenders of DRGs answer all criticisms by saying that costs simply must be cut. Even under complete capitalism, they say, doctors could not give unlimited treatment to every patient. This is true, but it ignores two crucial facts. It is because of government programs that medical prices have soared to the point of being out of reach for masses of patients. This was not true in the days of private medicine. The average American a generation ago could afford quality, in medicine as in every other area of life, without courting bankruptcy. Even if a patient could not afford it, at least, in the pre-welfare-state era, he was told the truth: as a rule, he was told about the treatment options available, and it was up to him, in consultation with his doctor, to weigh the possibilities and decide how to cut costs. But under the present system, the hospital not only has to cut services drastically — it is to its interest to conceal this fact from the patient. If he or his family ever learns that the angiogram he is not going to have, or the heart surgery, would make all the difference to the outcome of his case, he will immediately protest, insist on the service, even threaten to launch a malpractice suit. The system is rigged to squeeze every drop of quality out of medical care, so long as the patient does not understand what is happening. The patient does not know medicine; he relies on the doctor’s integrity to tell him what services are available and necessary in his case — yet, increasingly, the hospitals must try to batter down that integrity. They must try to make the doctor keep silent and not tell the patient the full truth.

The Medicare patient is no longer a free man to be accorded dignity and respect, but a puppet on the dole, to be manipulated accordingly — while the doctor is being transformed from a sovereign professional into a mere appendage and accessory, a helpless tool in a government-orchestrated campaign of shoddy quality and deception.

The government’s takeover of medical practice is not confined to public patients; it is starting to extend into the private sector as well. This brings me to the HMOs, which are now mushrooming all over the country.

HMO means “health-maintenance organization.” It could also have been called BBM, for “bargain-basement medicine.” In this setup, a group of doctors, perhaps with their own hospital, offers prepaid, all-inclusive medical care at a cheap rate. For a fixed payment in advance, a payment substantially less than a regular doctor would charge, the patient is guaranteed virtually complete coverage of his medical costs, no matter what they are. The principle here is the same as that of the DRG system: if the patient’s costs exceed his payment, the HMO loses money on him; if not, it makes a profit.

Although HMOs are privately owned, the spread of these organizations is wholly caused by government. There were very few HMOs in the days of private medicine. As part of the government’s campaign to lower the cost of medical care, however, Washington has decided to throw its immense weight behind HMOs, even going so far as to advertise nationally on their behalf and to give them direct financial subsidies.

How do HMOs achieve their low rates? In essence, by the DRG method — the method of curtailing services. In this case, however, the cuts in quality are more sweeping, inasmuch as the HMO embraces every aspect of medical care, not merely hospital costs. As a rule, HMO doctors do not have personal patients, nor does the patient have a choice of doctors or even necessarily see the same one twice — that is too expensive. The patient sees whoever is on duty when he shows up; the doctor gives up the luxury of following a case from beginning to end. Nor does the doctor have much time to spend with a given patient — HMOs are generally understaffed to save money; typically, there are long waiting lines of patients. Further, the doctor must obtain prior authorization of any significant expenditure from a highly cost-conscious administrator. The doctor may detect a possible abdominal tumor and request a CAT scan — in effect, an exquisitely detailed, 3-D X-ray. But if the administrator says to him: “It costs a lot. I don’t think it’s necessary,” the doctor is helpless. Or he may find that the patient has an aneurysm, a weakening of an artery that is like a time bomb waiting to go off, and he may want to operate to remove it. But the administrator can reply: “These cases often go years without rupturing. Let’s wait awhile.” Like the doctor under DRGs, the HMO doctor ultimately has to obey: he either keeps his costs within the dictated parameters, or he is out of work.

The kind of doctor who is willing or eager to practice medicine under these conditions represents a new breed, new at least in quantity. There is a generation of utterly unambitious young doctors growing up today, especially conspicuous in the HMOs, doctors who are the opposite of the old-fashioned physician in private practice — doctors who want to escape the responsibility of independent thought and judgment, and who are prepared to abandon the prospect of a large income or a private practice in order to achieve this end. These doctors do not mind the forfeit of their professional autonomy to the HMO administrator. They do not object to practicing cut-rate medicine with faceless patients on an assembly-line basis, so long as they themselves can escape blame for any bad results and cover their own tracks. These are the new bureaucratic doctors, the MDs with the mentality, and the fundamental indifference to their job, of the typical post-office clerk.

I hasten to add that there are better doctors in the HMOs (and that some HMOs are better than others). As a rule, however, these better doctors are mercilessly exploited. Being conscientious, they put in longer hours than necessary, trying to make up for the chronic understaffing. They do not give in meekly to arbitrary decrees on cost, but fight the administrator when they feel their own judgment is right. Increasingly, their professional life becomes a series of such fights, which makes them the heavies, hard to get along with and guilty of costing the HMO money — while their lesser colleagues capitulate to the system, do as they are told, and take things easy. Time after time, the better men step in to bail out such colleagues, struggling to correct their errors, clean up their messes, rescue their patients. At a certain point, however, the better doctors get fed up.

An HMO doctor in California, a qualified internist and a highly conscientious woman, told me the following story. “I was looking through a pile of cardiograms one day,” she said, “and I saw one that was clearly abnormal. I knew that the man should be taken by ambulance to the emergency room for retesting and possible hospitalization. Then I thought: it’s late Friday afternoon, and it’s going to take an hour and a half, and I’m not being paid for the extra work, and who will know if I wait until Monday? I was tempted for a minute to drop the whole thing and go home, but then the remnants of my conscience made me get up wearily and telephone the patient. This sort of thing,” she concluded, “happens all the time and not just to me, and often the doctor does simply look the other way.” Do you see what happens under a system in which the doctor is penalized for his virtue or, at the least, is deprived of any incentive, spiritual or material, including pride in his judgment and payment for his work? Would you like your cardiogram to be in a pile on this new breed’s desk? Yours is next — all of ours are.

The debased standards inherent in government medicine are now spreading to the whole of medical practice in the United States. The new medicine is not restricted to Medicare patients or to HMO members; it is soon going to engulf private doctors as well, even when they see their own private, paying patients. There are many reasons for this. The most obvious is the pressure from the health-insurance companies, such as Blue Cross and Blue Shield. Hospitals now are charging higher rates to private patients in order to recoup their losses from Medicare cases. As a result, the private insurance companies are demanding that a DRG-type system be imposed uniformly, on all patients. They want private insurance policies from now on to pay only according to arbitrary, preset rates, just as Medicare does now, which would put the total of medicine in this country — all patients, all doctors, all ailments — into the same category as the heart-attack patient we discussed earlier. His fate would become everyone’s, and the standards of American medicine would simply collapse.

If this demand of the insurance companies surprises you, remember that there are no truly private health-insurance companies in the U.S. today. What we have in this field is a government-protected, government-regulated cartel. And what the cartel wants is not more freedom, but more money through government favors, including stiffer government controls over medical costs.

The end of the Medicare road is complete socialized medicine.

Now you can see the absurdity of the claim that state payment of medical bills will not affect the freedom of physicians or the quality of patient care. State funding necessarily affects and corrupts every private service. Communism, in fact, is essentially nothing more than state funding. The Soviets pretty much leave doctors and everyone else free to dream or fantasize within their own skulls; all the government does is fund everything, i.e., take over the physical means of every citizen’s existence. The enslavement of the country, and thus the collapse of all standards, follows as a matter of course.

Now let me backtrack to answer an objection. I have been maintaining that the cause of our soaring health-care costs is government funding of medical care. Many observers, however, claim that the cause is the rapid advances in medical technology, such as CAT scanners or the latest, most sophisticated disease-detecting instruments, the magnetic resonance imaging or MRI machines. These people want to limit such technology or even abolish it.

Technology by itself does not drive up costs; it generally reduces costs as it improves the quality of life. The normal pattern, exemplified by the automobile and computer industries, is that a new invention is expensive at first, so that only a few can afford it. But inventors and businessmen persevere, aiming for the profits that come from a mass market. Eventually, they discover cheaper and better methods of production. Gradually, costs come down until the general population can afford to buy. No one is bankrupted, everyone gains.

The source of today’s national bankruptcy in the field of medicine is not technology, but technology injected into the field by government decree, apart from supply and demand. State-of-the-art medical treatment — including new inventions or procedures that are still prohibitively expensive, such as liver transplants and long-term kidney dialyses — is now being financed by the government for the total population in the name of egalitarianism. The result is the unbelievable expenditures, far beyond most people’s capacity to afford, which are made routinely in our hospitals. These expenditures are particularly evident in regard to the terminally ill, who almost always fall under the umbrella of some government-supported insurance program. It has been estimated that 1 percent of our GNP is now spent on the dying in their last weeks of life. Or: one-half of a man’s lifetime medical expenses occur now in the last six months of his life.

In a free society, you personally would have to make a choice: do you want to defer consumption, cancel vacations, forgo pleasures year after year, so as to extend your life in the ICU by a few months at the end? If you do, no one would interfere under capitalism. You could hoard your cash and then have a glorious spree in the hospital as you die. I would not care to do this. It does not bother me that some billionaire can live months longer than I by using machinery that I cannot begin to afford. I would rather be able to make ends meet, enjoy my life, and die a bit sooner. But in a free society, you are not bound by my decision; each man makes and finances his own choice. The moral principle here is clear-cut: a man has a right to act to sustain his life, but no right to loot others in the process. If he cannot afford some science fiction cure, he must learn to accept the facts of reality and make the best of it.

In a free society, the few who could afford costly discoveries would, by the normal mechanism, help to bring the costs down. Gradually, more and more of us could afford more and more of the new technology, and there would be no health-cost crisis at all. Everyone would benefit, no one would be crushed. The terminally ill would not be robbing everyone else of his life, as is happening now, thanks to government intervention; the elderly would not be devouring the substance of the young.

You may wonder if I have now covered, at least in essence, the ways in which government is wrecking the practice of medicine. I have barely scratched the surface. For example, I have not even mentioned the formal introduction of the principle of collectivism into medical practice — of committee-medicine as against individual judgment. This is exemplified by the flourishing PROs in our hospitals, the Peer Review Organizations, which act to oversee and strengthen the various DRG controls. PROs are committees of doctors and nurses established by the government to monitor the treatment of Medicare patients, and especially to cut its cost — committees with substantial power to enforce their arbitrary judgments on any dissenting doctor. These committees are the equivalent in the Medicare system of the HMO administrators, and have potentially the same kind of all-encompassing power to forbid hospital stays (along with the associated tests and surgical procedures), even when the admitting doctor thinks they are required.

Nor have I yet mentioned CONs, or Certificates of Need. Since the government regards anything new in the field of medicine as potentially expensive, a hospital today is prohibited from growing in any respect, whether we speak of more beds or new technology, unless the administrator can prove “need” to some official. Since “need” in this context is undefined and unprovable, the operative criterion is not “need” at all, but pull, political pull. Under this program, the government [in 1984] denied Sloan-Kettering, the famous New York cancer hospital, permission to purchase an MRI machine, because another New York hospital already had one. Later, the government backed down in the face of the resulting public uproar. But what about the hospitals that do not enjoy such fame or contacts, and that are inexplicably denied the right to acquire a crucial diagnostic tool? So far, the freeze on them is only partly effective. Doctors are still allowed to purchase new equipment for their own offices, which hospital patients now often use. But the government is fighting to close this loophole; it is on the verge of decreeing that private doctors cannot purchase new equipment for their own offices, out of their own funds, without a government certificate of “need.” Here again you can see how your care will be affected, even if you are not a Medicare patient. If your doctor or hospital is not allowed to have the equipment, you cannot benefit from it either. It isn’t there. It doesn’t exist.

Nor have I mentioned the hundreds of other government interventions in medicine. In the space of a year, state legislatures alone recently enacted almost three hundred pieces of health-cost containment legislation. One hospital in New York now reports to ninety-nine separate regulatory agencies.

And I have not yet touched on what is perhaps the worst crisis in the field of medicine today, the one most demoralizing to the doctors: the malpractice crisis. This crisis illustrates dramatically, in yet another form, the lethal effects of government intervention in the field of medicine.

Medical malpractice suits have trebled in the past decade. There are now [1985] about sixteen lawsuits for every hundred doctors. In addition, awards to plaintiffs average around $330,000 and are steadily climbing. The effect of this situation on physicians is unspeakable. First, I have been told, there is fear, chronic fear, the terror of the next attorney’s letter in the mail. Then there is the agony of drawn-out legal harassment, including endless depositions and a protracted trial. There is the exhaustion of feeling that one lives in a malevolent universe, in which every patient is a potential enemy. Always, there is the looming specter: a career-destroying verdict. And whatever the verdict, win or lose, there is the fact that all the doctors, innocent and guilty alike, are paying for it. They are paying for the exorbitant awards in the form of unbelievable insurance premiums — over $100,000 per year per physician in some places.

In response to this situation, doctors are forced to engage wholesale in “defensive medicine,” i.e., the performing of unnecessary tests or procedures solely in order to build a legal record and thereby prevent the patient from suing later. For example, I heard about the case of a man falling and bumping his head slightly. Since there was no evidence of any head injury, there was no basis, in the doctor’s judgment, to order an expensive series of skull X-rays. But if he does not order it, he takes a chance: if, months or even years later, the man should develop mysterious headaches, the doctor might be sued. He might be charged retroactively with negligence, since he omitted a test that might have shown something that might have enabled him to prevent the headaches. So the doctor has no choice; he has to order the tests to protect himself. By a conservative estimate, defensive medicine now accounts for one-third of all health-care costs.

Since the medical profession did not suddenly turn evil or irresponsible in the last several years, we must ask what is the cause of the soaring lawsuits. The most immediately apparent answer lies in the law, which has now lost any pretense at rationality. The standards of liability are corrupt. Negligence, in any rational sense of the term, is no longer the legal standard. Today’s standard demands of the doctor not responsible care, but omniscience and omnipotence.

For example, if a doctor prescribes a drug that is safe by every known test, and years later it is discovered to have side effects undreamed of at the time, the doctor can be sued. Was he negligent? No, merely not omniscient. If he treats a patient with less than the most expensive technology, whether the patient can afford it or not, he can be sued. “You open yourself to a malpractice suit,” says an attorney in the field, “if you even give the appearance of letting financial considerations conflict with good patient care.” 1 Arthur R. Chenen, “Prospective Payment Can Put You in Court,” Medical Economics, July 9, 1984. Or: if a baby has a birth defect that can be ascribed to the trauma of labor, the obstetrician can be sued for not having done a Caesarian, even though there were no advance indications in favor of one — because, as one obstetrician puts it, people assume “that anything less [than perfection] is due to negligence.” 2 Allan Rosenfield, quoted in Susan Squire, “The Doctors’ Dilemma,” New York, March 18, 1985. This last statement actually reveals the operative principle of the law today, not of some crackpot left-wing radical, but of the law: the patient is entitled to have whatever he wishes, regardless of cost or means; it makes no difference what doctors know, or whether the money exists; the patient’s desire is an absolute, the doctor is a mere serf expected to provide all comers with an undefined “perfect care” somehow.

Do you see where this idea comes from? It is the basic principle that underlies and gave birth to Medicare. “You the patient,” Washington said in the 1960s, “need do nothing to earn your medical care or your cures. From now on you need merely wish, and the all-powerful government will do the rest for you.” Well, now we see the result. We see the rise of a generation of patients (and lawyers) who believe it, who expect treatment and cures as a matter of right, simply because they wish it, and who storm into court when their wish is frustrated.

The government not only inculcates such an attitude, but makes it seem financially feasible as well, because Washington has poured so much money into the field of medicine for so long. How else could anyone afford the defensive tests, or the inflated medical prices necessary to help pay for the incredible malpractice awards? They could not have been afforded in a free-market context. In the days of private medicine, there was no malpractice crisis; there was neither the public psychology nor the irresponsible funding that it requires. But now, thanks to government, there is both. And there is also a large enough corps of unscrupulous lawyers who are delighted to cash in on the disaster, lawyers who are eager to extort every penny they can from conscientious, bewildered, and in most cases utterly innocent doctors — while grabbing off huge contingency fees for themselves in the process.

The only solution to the malpractice crisis is a rational definition of “malpractice,” which would restrict the concept severely, to cases of demonstrable negligence or irresponsibility, within the context of objective definitions of these terms, taking into account the knowledge and the money available at the time. But this approach is impossible until the government gets its standards and its cash out of the medical business altogether.

We are all kept alive by the work of man’s mind — the individual minds that still retain the autonomy necessary to think and to judge. In medicine, above all, the mind must be left free. Medical treatment, as I have said, involves countless variables and options that must be taken into account, weighed, and summed up by the doctor’s mind and subconscious. Your life depends on the private, inner essence of the doctor’s function: it depends on the input that enters his brain, and on the processing such input receives from him.

What is being thrust now into the equation? It is not only objective medical facts any longer. Today, in one form or another, the following also has to enter that brain: “The DRG administrator will raise hell if I operate, but the malpractice attorney will have a field day if I don’t — and my rival down the street, who heads the local PRO, favors a CAT scan in these cases, I can’t afford to antagonize him, but the CON boys disagree and they won’t authorize a CAT scanner for our hospital — and besides the FDA prohibits the drug I should be prescribing, even though it is widely used in Europe, and the IRS might not allow the patient a tax deduction for it, anyhow, and I can’t get a specialist’s advice because the latest Medicare rules prohibit a consultation with this diagnosis, and maybe I shouldn’t even take this patient, he’s so sick — after all, some doctors are manipulating their slate of patients, they accept only the healthiest ones, so their average costs are coming in lower than mine, and it looks bad for my staff privileges. . .” Would you like your case to be treated this way — by a doctor who takes into account your objective medical needs and the contradictory, unintelligible demands of ninety-nine different government agencies and lawyer squads? If you were a doctor, could you comply with all of it? Could you plan for or work around or deal with the unknowable? But how could you not? Those agencies and squads are real, and they are rapidly gaining total power over you and your mind and your patients.

In this kind of nightmare world, if and when it takes hold fully, thought is helpless; no one can decide by rational means what to do. A doctor either obeys the loudest authority; or he tries to sneak by unnoticed, bootlegging some good health care occasionally; or he gives up and quits the field.

Now you can understand why Objectivism holds that mind and force are opposites — and why innovation always disappears in totalitarian countries — and why doctors and patients alike are going to perish under socialized medicine if its invasion of this nation is not reversed.

Conservatives sometimes observe that government, by freezing medical fees, is destroying the doctors’ financial incentive to practice. This is true enough, but my point is different. With or without incentive, the doctors are being placed in a position where they literally cannot function — where they cannot think, judge, know what to do, or act on their conclusions. Increasingly, for a man who is conscientious, today’s government is making the practice of medicine impossible.

The doctors know it, and many have decided what to do about it. In preparation for this talk, I spoke to or heard from physicians around the country. I wanted to learn their view of the state of their profession. From New York to California, from Minnesota to Florida, the response was almost always the same: “I’m getting out of medicine.” “I can’t take it any more.” “I’m putting every cent I can into my pension plan. In five years, I’ll retire.”

Such is the reward our country is now offering to its doctors, in payment for their life-saving dedication, effort, and achievements.

As to talented newcomers rising to replace the men who quit, I want to point out that medical-school enrollments are dropping. Bright students today, says the president of the Mount Sinai School of Medicine, are “discouraged by the perception of growing government regulation of medicine.” 3 James F. Glenn, quoted in “Professional Schools’ Enrollment Off,” New York Times, Feb. 10, 1985. Note that it is bright students about whom he speaks. The other kind will always be in ample supply.

Any government program has beneficiaries who fight to keep the program going. Who is benefiting from the destruction of the doctors? It is not the poor. A generation ago, the poor in this country received excellent care through private charity, comparatively much better care than they are going to get now under the DRG and HMO approaches. The beneficiary is not the poor, but only one subgroup among them: those who do not want to admit that they are charity cases, those who want to pretend that they are entitled to medical handouts as a matter of right. In other words, the beneficiary is the dishonest poor, who want righteously to collect the unearned and consider it an affront even to have to say “Thank you.” There is a second beneficiary: the new 9-to-5, civil-servant doctor, the kind who once existed only on the fringes of medicine, but who now basks in the limelight of being a physician and healer, because his betters are being frozen out. And there is one more kind of beneficiary: the medical bureaucrats, lobbyists, legislators, and the malpractice lawyers — in short, all the force-wielders now slithering out of their holes, gorging themselves on unearned jobs, money, fame, and/or power, by virtue of having sunk their fangs into the body of the medical profession.

Altruism, as Ayn Rand has demonstrated, does not mean kindness or benevolence; it means that man is a sacrificial animal; it means that some men are to be sacrificed to others. Our country today is a textbook illustration of her point. The competent doctors, along with their self-supporting patients, are being sacrificed — to the parasites, the incompetents, and the brutes. This is how altruism always works. This is how it has to work, by its nature.

The doctors resent today’s situation passionately. Many of them are ready to quit, but not to fight for their field — at least, not to fight in the manner that would be necessary, if they were to have a chance of winning. In part, this is because the doctors are frightened; they sense that if they speak out too loudly, they may be subject to government reprisals. Most of all, however, the doctors feel guilty. Their own professional motivation — the personal, selfish love of their field and of their mind’s ability to function — is noble, but they do not know it.

For ages they have had it pounded into them that it is wrong to have a personal motivation, wrong to enjoy the material rewards of their labor, wrong to assert their own individual rights. They have been told over and over that, no matter what their own private desires, they should want to sacrifice themselves to society. And so they are torn now by a moral conflict and silenced by despair. They do not know what to say if they quit, or how to protest their enslavement. They do not know that selfishness, the rational selfishness they embody and practice, is the essence of virtue. They do not know that they are not servants of their patients, but, to quote Ayn Rand, “traders, like everyone else in a free society — and they should bear that title proudly, considering the crucial importance of the services they offer.” If the doctors could hear just this much and learn to speak out against their jailers, there would still be a chance; but only if they speak out as a matter of solemn justice, upholding a moral principle, the first moral principle: self-preservation.

Thereafter, in practical terms, they — and all of us — could advocate the only solution to today’s crisis: removing its primary cause. This means: closing down Medicare. Reducing Medicare’s budget is not the answer — that will simply tighten the DRG noose. The program itself must be abolished. In principle, the method is simple: phase it out in stages. Let the government continue to pay, on a sliding scale, for those who are already too old to save for their final years, but give clear notice to the younger generations that there is a cutoff age, and that they must begin now to make their own provision for their later medical costs.

Is there still time for such a step? The most I can answer is: in ten years, there won’t be — that is how fast things are moving. In ten years, perhaps even in five, our medical system will have been dismantled. Most of the best doctors will have retired or gone on strike, and the government will be so entrenched in the field that nothing will get rid of it.

If you are my age, you may sneak by with the rest of your lifespan, relying on the remnants of private medicine that still exist. But if you are in your teens, twenties, thirties, then you are too young to count on such a hope. To you in particular, I want to conclude by saying: find out what is going on in this field — don’t take my word for it — and then act, let people know the situation, in whatever way is open to you. Above all, talk to your doctor. If you agree with the Declaration of Independence, tell him that he, too, comes under it; that he, too, is a human being with a right to life; and that you want to help protect his freedom, and his income, on purely selfish grounds.

If you are looking for a crusade, there is none that is more idealistic or more practical. This one is devoted to protecting some of the greatest creators in the history of this country. It is also literally a matter of life and death — your life, and that of anyone you love. Don’t let it go without a fight.

Assault from the Ivory Tower: The Professors’ War Against America

This lecture was delivered at Boston’s Ford Hall Forum on April 24, 1983, published in The Objectivist Forum in October–December 1983 and anthologized in The Voice of Reason: Essays in Objectivist Thought in 1989.

Intellectuals around the world generally take a certain pride, whether deserved or not, in their own countries’ achievements and traditions. When they lash out at some group, it is not their nation, but some villain allegedly threatening it, such as the rich, the Jews, or the West. This pattern is true of Canada, from which I originally came, and it is true to my knowledge of England, France, Germany, Russia, China. But it is not true of America. One of the most striking things I observed when I first came here was the disapproval, the resentment, even the hatred of America, of the country as such and of most things American, which is displayed by American intellectuals; it is especially evident among professors in the humanities and social sciences, whom I came to know the best.

Typically these professors regard the American political system, capitalism, as barbaric, anachronistic, selfish. They tell their classes that the American past is a record of brutal injustice, whether to the poor, or to the Third World, or to the fish, or to the ethnic group of the moment. They describe the American people as materialistic, insensitive, racist. They seem to regard most things European or Oriental or even primitive as interesting, cultured, potentially deep, and anything characteristically American — from rugged individualism to moon landings to tap dancing to hamburgers — as junk, as superficial, vulgar, philistine. When the New Left, taught by these same professors, erupted a while back, the student rebels expressed their philosophy by desecrating the American flag — blowing their noses in it, or using it to patch the seat of their pants. I do not know another country in which anti-patriotism has ever on such a scale been the symbol of an ideology.

It happened here because America at root is an ideology. America is the only country in history created not by meaningless warfare or geographical accident, but deliberately, on the basis of certain fundamental ideas. The founding fathers explicitly championed a certain philosophy, which they made the basis of America’s distinctive political institutions and national character, and that philosophy to some extent survives among the citizens to this day. That is why the professors I mentioned can feel at home and at peace anywhere else in the world, but not here: the fundamental ideas of the founding fathers are anathema to today’s intellectuals.

The war against America mentioned in the title of my talk is not a political or anticapitalist war as such; that is merely a result, a last consequence. The war I want to discuss is deeper: it is the assault against the founding philosophy of this country that is now being conducted by our universities. This war is being conducted not only by radicals and by leftists, but also by most of the mainstream, respectable moderates on the faculties. There are exceptions; there are professors still carrying on some traditions from a better era. But these men are not a power in our colleges, merely a remnant of the past that has not yet fully died out.

The basic philosophic credo of the United States was eloquently stated two centuries ago by Elihu Palmer, a spokesman of the revolutionary era. “The strength of the human understanding,” he wrote, “is incalculable, its keenness of discernment would ultimately penetrate into every part of nature, were it permitted to operate with uncontrolled and unqualified freedom.” At last, he says, men have escaped from the mind-destroying ideas of the Middle Ages; they have grasped “the unlimited power of human reason,” “reason, which is the glory of our nature.” Now, he says, men should feel “an unqualified confidence” in their mental powers and energy, and they should proceed to remake the world accordingly. 1 Principles of Nature (New York: 1801); excerpted in Ideas in America, ed. by G.N. Grob and R.N. Beck (Free Press: 1970), pp. 81–84.

Such was the basic approach of the men who threw off the shackles of a despotic past and built this nation.

Now let me quote, more or less at random, from some modern college teachers. In preparation for this talk, I asked Objectivists around the country to tell me what they are being taught in college on basic issues. I received a flood of eloquent mail and clippings, for which I am very grateful, and I would like to share some of it with you.

First, an excerpt from a textbook on The Craft of Writing prepared by some professors of rhetoric at Berkeley:

“What do Plato’s opinions, or any other writer’s opinions we might choose to study, have to do with learning to write? Everything. Before anything good can come out of writing, the students must at least sense the presuppositions of the writer in his civilization. And the first presupposition is this: we do not really know, surely and indubitably, the answer to any important question. Other cultures know such answers, or think they do, and writing is consequently a very different experience for them. But we, collectively, do not. . . . It would be very comfortable to be able to act upon the basis of immutable truth, but it is not available to us.” 2 W.J. Brandt, R. Beloof, L. Nathan, and C.E. Selph (Prentice Hall: 1969), p. 23. Note here the statement of pure skepticism: truth or knowledge is not available to us — offered as a flat statement, uncontroversial, even self-evident.

Next I quote from The Washington Post, from a story about a symposium held at Catholic University, dealing with Galileo’s intransigent defense of his beliefs against the Inquisition. At one point, a prominent Harvard astronomer made an offhand comment contrasting Galileo’s attitude toward scientific beliefs with that of modern scientists. “Today in science,” the professor said, “there is no ‘belief’ as such, only probability.”

A man in the audience, visibly emotional, stood up [the story continues]. “I cannot credit it. I cannot believe you would say” that scientists do not really “believe” in the objects they study. . . . “Do you really think it’s possible that [astronomical science] is all wrong?” he demanded. “Yes,” said [the astronomer]. “It is possible.”

We cannot, he went on, know that there are atoms or what stars are. The reporter then summarizes the astronomer’s conclusion:

Scientists now cannot fail to remember that absolute reality collapsed just after the turn of the century, with Einstein. . . . Since then, one simply cannot speak of certainties, of what is real and what is not. “I cannot believe it,” muttered the man in the audience as he sat down. 3 Philip J. Hilts, “Caught Between Faith and Fact,” Sept. 26, 1982, p. H1.

He better believe it. This viewpoint is standard today; the latest scientific discoveries, we are told regularly, invalidate everything we thought we once knew, and prove that reality is inaccessible to our minds. If so, one might ask, what is it that scientists are studying? If we can know nothing, how did Einstein arrive at his discoveries and how do we know that they are right? And if certainty is unattainable and inconceivable, how can we decide how close we are to it, which is what a probability estimate is? But it is no use asking such questions, because the cause of modern skepticism is not Einstein or any scientific discoveries.

Now let me tell you about another incident. One Objectivist undergraduate at Columbia University wrote, for a composition course, a research paper presenting the founding fathers’ view of reason. The paper was sympathetic to the founding fathers’ view, though not explicitly so. The teacher several times put question marks beside phrases that bothered her (e.g., beside “facts of reality”) or wrote marginal comments such as “Do you really believe this?” At the end, she summed up: “The paper is very well written. . . . It’s difficult for me to see how we can write about ‘reason’ without the nineteenth century’s sad discovery in mind — that . . . [the belief that] reason will help us get better and better meant naiveté in many senses. Let’s discuss.” In the discussion, the student told me, the teacher said that the nineteenth century had established the inability of reason to know reality. Freud in particular, she said, had refuted the founding fathers. “He showed that man is really an irrational creature, and that the Enlightenment idea that all our problems can be solved by reason is quite unjustified.” 4 College Composition I, F1101 Y:01, Spring 1980. In cases such as this, to protect the privacy of students, I am citing only the course number and/or year (when known to me).

The founding fathers, as thinkers of the Enlightenment era, championed the power of man’s unaided intellect. It was on this basis, after centuries of European tyranny, that they urged the right to liberty, which was the right of each man to rely in action on his own mind’s judgment. They upheld this right because they believed that the human mind is reliable — that, properly employed, it can reach a knowledge of reality and give the individual the guidance he needs to live. The individual, they held, does not have to submit blindly to any authority, whether church or state, because he has within himself a brilliant and potent cognitive tool to direct him. That tool is the power of reason, the “only oracle” he needs — “oracle” in the sense of a source of absolute, objective truth.

There is no such truth, said the antipode and destroyer of the founding fathers’ legacy. I mean the philosopher Immanuel Kant. Kant is the basic cause of the modern anti-reason trend. He is the man who, two hundred years ago, launched an unprecedented attack on the power of the human mind, declared that reason is in principle incapable of knowing reality, and thereby put an end to the Enlightenment. Freud was merely one of his many heirs, as are the modern skeptics who distort Einstein’s findings to rationalize their viewpoint, as are the rhetoric professors at Berkeley and all their like-minded colleagues. In countless forms, Kant’s rejection of reason is at the root of our modern colleges.

Question, debate, dispute — the founding fathers urged men — because by this means you will reach answers to your questions and discover how to act. Question, debate, dispute — our Kantianized faculty urges today — not to find the answers, but to discover that there aren’t any, that there is no source of truth and no guide to action, that the Enlightenment viewpoint was merely a comfortable superstition or a naiveté. Come to college, they say, and we’ll cure you of that superstition for life. Which, unfortunately, they often do. “On the first day of classes,” a student from Kent State University in Ohio wrote me, “my English professor said the purpose of college is to take a high school graduate who’s sure of himself and make him confused.”

“Kent fulfills that objective perfectly,” the writer adds, not only in its insistent pro-skepticism propaganda, but also in its very method of presenting the course material. “Its courses are a hodgepodge of random and contradictory information that can’t possibly be integrated into a consistent whole, and one of the first things it teaches its students is not to bother to try. The typical Kent graduate leaves the school feeling bewildered . . . vaguely pleased that his bewilderment must mean he came out of college smarter than when he went in, and vaguely displeased that his enlightened confusion hasn’t made him happier than it has.” 5 Fall 1969. This is an exact description of many current graduates, and unfortunately not only in Ohio. That English professor’s statement of the purpose of college was not a wisecrack; it was meant, and practiced, as a serious pedagogical principle. We have reached a variant of the inverted slogans of Orwell’s 1984: the claim to knowledge, we are being taught, betrays ignorance. Knowledge is Ignorance, but Confusion is Enlightenment. That is what you can hope to achieve after tens of thousands of dollars in tuition and four years of study and agonizing term papers — a B.C. degree, Bachelor of Confusion.

If no one can know the truth, you might ask, why are these professors bothering to pursue their subjects at all? Some claim to be attaining probability, by unspecified means. But some are more modern and more frank. Here is another teacher from Columbia, this time from the Graduate School of Business, who offers a course entitled “Individual and Collective Behavior.” According to one of his students, this teacher stated in class “that psychological theories cannot be proved. He added that this was a good thing, since it provided scope for further research.” 6 B9706, sec. 101, Spring 1982.

Do you follow the reasoning here? If we could prove a psychological theory, that would eliminate a whole area of research; there would be no need to investigate that particular question, because we would already have established the answer. On the other hand, if we can never know, we can go on looking forever, with no ugly barriers, such as knowledge, to stand in the way. But why then look? Why is research good if we never prove anything by it? Obviously, it is an end in itself. One does research in order to get research grants from the government, in order to write papers and get promotions so that other researchers can attack one’s papers and thereby get more grants to finance more research for more studies, forever; with a voluminous literature on the weirdest, most senseless subjects pouring out, which everyone must study and no one can keep up with or integrate, and with everyone agreeing that none of it proves anything — all of it a giant academic con game divorced from cognition, from human life, from reality. Such is the nature of research under the reign of skepticism.

No one, however, can be a consistent skeptic; a man devoid of all knowledge would be like a newborn baby, unable to act or function at all. Despite their viewpoint, therefore, skeptics have to find something to rely on and follow as a guide, and what most of them choose to follow ultimately is: the opinion of others, the group, society.

Kant gave this approach a complex philosophic defense. There are, he says, two realities. There is reality as it is in itself, which is unknowable. And there is the reality we live in and deal with, the physical world, which, he says, mankind itself creates; the physical world, he says, is created by subjective but universal mechanisms inherent in the human mind. An idea that is merely the product of an individual brain, in this view, may or may not be acceptable; but an idea universal to the mind of the species can necessarily be relied on, because that defines reality for us; that is what creates reality, at least our private, subjective, human reality. Under all its complexities and qualifications (and there are mountains of them) this doctrine amounts to saying: the individual’s mind is helpless, but the group, mankind, is cognitively all-powerful. If mankind collectively thinks in terms of a certain idea, that is truth, not the objective, real truth, of course, we can’t know that; but subjective, human truth, which is the only truth we can know.

The founding fathers, being champions of reason, were champions of the individual. Reason, they held, is an attribute of each man alone, by himself; the power of the mind means the power of the individual. With today’s anti-reason trend, however, such individualism simply disappears. In our colleges today, therefore, alongside Kant’s skepticism about true reality, there is also the other element of Kant, the one systematically promoted by Hegel and Marx: the exaltation of the social. The student gets a powerful double message: you can’t know anything, there is no certainty — and: society knows, you must adapt to its beliefs, who are you to question the consensus?

Here is an example of the second from a psychology textbook written by a professor at the University of North Carolina. Let me preface this by saying that philosophers before Kant used to distinguish two sources of knowledge: experience (which led to empirical knowledge) and reason (rational knowledge). These two were conceived, with whatever errors, as capacities of the individual enabling him to reach truth. Now here are the new, Kantian definitions. “Empirical knowledge is the agreement in reports of repeated observations made by two or more persons. Rational knowledge is the agreement in results of problem solving by two or more persons.” 7 William S. Ray, The Science of Psychology (Macmillan: 1964), p. 5. In other words: the genus of knowledge is agreement; the fundamental of knowledge is a social consideration, not the relationship of your mind to reality, but to other men. The individual by himself, on a desert island, cannot learn; he is cut off from the possibility of any knowledge, because he cannot tabulate agreement or disagreement. Empirical observation is not using your eyes, but taking a Gallup poll of others’ reports on their eyes. Rational knowledge is not achieved by your brain grasping a logical argument; it is “agreement in results of problem solving” — and if men happen not to agree, for whatever reason or lack of reason, then there is no rational knowledge. This is nothing less than public ownership of the means of cognition, which, as Ayn Rand observed, is what underlies the notion of public ownership of the means of production.

If you want to see both Kantian elements — skepticism and the worship of the social — come together, consider the field of history today. Here is an excerpt from a course description at Indiana University (Bloomington); the course is titled “Freedom and the Historian.”

History is made by the historian. Each generation of historians reinterprets the past in the light of its own historical experience and values. . . . There can be thus no one definitive history of Alexander and no one historical truth about the fall of the Roman Empire. . . . There have been as many concepts of history, as many views of historical truth, as there have been cultures. 8 Course number H300, cross-listed as History K492, sec. 2856; date unknown.

The skeptical theme here is clear — there is “no one definitive history,” “no one historical truth.” An old-fashioned person, even of a skeptical mentality, would react: “Well, then, let’s close down the field, if we can’t know the truth.” But not the moderns. We can’t know the real truth, they say, but we can know the subjective truth that we ourselves create. “History is made by the historian.” If there is a consensus of historians, therefore, their viewpoint is valid and worth studying, for that time and culture. As in Kant, there are two realities: the real past (unknowable), and the private past each generation creates, its own subjective historical truth. Notice that in this viewpoint the historian is at once helpless and omnipotent: he can know nothing really; but on the other hand he is the creator of history, of the history that we can know, and so he is an unchallengeable authority. If any student disagrees with the fraternity of historians, therefore, he has no chance. On the one side, he hears: “Who are you to know? There are no definitive facts.” On the other, he hears: “History is made by the historian. Who are you to question it?”

Observe what people allow themselves when hiding behind a group. If the author of that course description were to say: “History is made by me,” he would be dismissed as a paranoid personality. But when he says it collectively: “History is made by us, by our guild, by historians,” that is acceptable. This is the Kantian exaltation of the social.

There is a further development of Kant’s approach beckoning here. Why, historians soon began to ask, should the social authority be universal? Why can’t there be many groups of historians, each creating history in accordance with its own mental structure, each version being true for that group though not for the others? Why, in effect, shouldn’t we be democratic and let every collective into the act? The result of this line of thinking is pressure-group history, a pluralization of the Kantian approach, in which every group rewrites the past according to its own predilections, and every group’s views are deemed to be as valid (or invalid) as every other group’s. To be progressive in history today means precisely this: it means to respect the rewriting of all the newest groups, especially if their spokesmen make no sense to you; that shows that you are open-minded, and are not trying to impose your group’s private views on others. To each his own subjectivism.

Is this an exaggeration? A prominent history professor at Stanford University, Carl Degler, recently made a plea for women’s history, explaining that history varies subjectively from men to women. He declared: “The real test of the success of affirmative action for women will come not by counting the number or proportion of women in a department or profession, but by the extent to which men . . . are willing to accept the new and peculiar interests of women as legitimate and serious, even when those interests are strikingly novel and perhaps even bizarre when compared with current acceptable work in a given field.” 9 “Women Approach History Differently — and Men Must Understand the Difference,” Stanford Observer, Oct. 1982, p. 2; reprinted from Chronicle of Higher Education, Sept. 15, 1982. [Emphasis added.]

I once heard a feminist intellectual on television declare that the central fact of the ages is rape, and that the culmination of the historical process is the discovery of the clitoral orgasm, which has finally freed women from men. This is surely an approach to history which is “strikingly novel and even bizarre,” but we mustn’t be chauvinistic; history is made by historians, and if a certain group begins to push a certain line, and organizes into a new pressure-unit, that line becomes true, true for these people, as true as any other claim in a world where no one can really know anything. This is what I call Kantianized history.

The founding fathers, as men of the Enlightenment, were champions of dispassionate objectivity; any form of subjectivism, or of emotion-driven cognition, was considered reprehensible by them. The opposite is true today. If objectivity is not possible to man, as the Kantians hold, then in the end anything goes, including any kind of emotionalism; and the humanities and social sciences end up, not as academic disciplines teaching facts, but as the preserve of shifting lobbyists disseminating sheer propaganda, which is what is happening increasingly in our colleges.

History is merely one example of it. The field of anthropology offers another eloquent illustration. First we read, a few months ago, about the scandal of Margaret Mead. In her famous 1928 book Coming of Age in Samoa, Miss Mead presented an idyllic picture of life in Samoa. The natives, she claimed, were gentle, peaceful, open, devoid of jealousy, free of stress. It was Rousseau over again (the noble savage), and Miss Mead’s implicit moral was: the superiority of primitive culture over competitive, repressed Western society. Now, finally, a true scholar, Derek Freeman, an anthropologist from New Zealand, has set the record straight. After years of study in Samoa, he concluded that the Samoans [I quote The New York Times’s summary] “have high rates of homicide and assault, and the incidence of rape in Samoa is among the highest in the world. . . . [The Samoans] live within an authority system that regularly results in psychological disturbances ranging from compulsive behaviors to hysterical illnesses and suicide. They are extremely prone to fits of jealousy.” Etc. Miss Mead’s claims, in sum, “are fundamentally in error and some of them preposterously false.” 10 Edwin McDowell, “New Samoa Book Challenges Margaret Mead’s Conclusions,” Jan. 31, 1983, p. C21.

Judging by what one can gather from the press, anthropologists had known some of this for some time, but few had wanted to challenge Miss Mead publicly. Why not? Aside from a nature-nurture controversy that became involved here, two main reasons were operative, as far as I can make out.

One was the feeling that Miss Mead’s viewpoint — her endorsement of primitive society over Western civilization — is noble, moral, good. The second is a pervasive subjectivism, which makes a potential dissenter feel: “I can’t be sure, anybody can claim to prove or disprove anything, anthropology is whatever anthropologists say, why start a fight with a saint of the field for nothing?”

Now couple this episode with another recent scandal in anthropology. Did you read about the doctoral candidate from Stanford who, while studying in Red China, found that abortions were being forcibly performed on helpless women after the sixth month of pregnancy (when it is a dangerous, bloody practice), and who published this news in a Taiwanese weekly complete with photographs? The Chinese were furious, though the truth of his charges is not debated; and the Stanford Anthropology Department expelled the student from Stanford for unethical conduct — in effect, so far as one can decipher the department’s statements, for blowing the whistle on his host country, an allegedly unforgivable academic sin. As one radio talk-show host in New York, Barry Farber, asked rhetorically: can you imagine the Stanford Anthropology Department expelling a student for doing exactly the same thing in regard to South Africa, i.e., for publishing articles about that regime’s racial crimes? Such a student would have been treated as an academic hero.

The double standard involved in the two cases is appalling. One scholar, Margaret Mead, who condemns the West, becomes a revered figure for decades, even though her factual claims are dead wrong. Another, who prints the uncontested truth about a communist dictatorship, is expelled from his discipline. Is this fairness? Is this objectivity? Or is this the complete politicization of the field? But we must remember: the Kantians declare that there is no objectivity, and that truth is whatever the group wants it to be. In the social sciences today, the teachers do not leave much doubt about what they want it to be.

I must quote one further example of today’s subjectivist trend, simply to indicate to you how brazen it is becoming. A recent issue of The National Law Journal describes a new development in the teaching of law in our universities, a development sponsored by a Harvard law professor, a law professor from SUNY (Buffalo), a sociologist from the University of Pennsylvania, and many others. These men “agree that an objective legal mode of reasoning, distinguishable from the society where it is being applied and the people applying it and capable of yielding an inevitable result, does not exist; that law, by its mask of objectivity, functions chiefly to legitimize social and economic inequities in the eyes of the lower classes as a way of keeping them docile; that because democracy is a good and the law a shell, the goal is to found a government not by law but by people.” 11 Ben Gerson, “Professors for the Revolution,” Aug. 23, 1982, p. 10.

This statement is a union of Kant and Marx. Let me translate it. “There is no objective legal reasoning; law pretends to be objective, but really it is an instrument of the wealthy to keep the poor docile; law, in effect, is the opiate of the masses” — these are law professors speaking, mind you — “and our goal should be a system run not by law, but by people.” How are the people to govern themselves, if not by reference to an objective code of laws? How are they to settle their disputes and resolve conflicting claims? In this context, there is only one alternative to government by law: government by pressure group, i.e., by every sizable pack or tribe in the land struggling to seize control of the legislature and the courts, and then ramming its arbitrary desires down the throats of the rest, until they rebel and start ramming their desires, etc. — all of it a naked exercise in power politics, of group-eat-group, without the pretense of objectivity or justice.

One of the great achievements of Western civilization was the concept of a society in which men are not left helplessly at the mercy of clashing groups, but can resolve disputes fairly, as individuals, by reference to impersonal principle. This is what used to be called a government of laws and not of men. Today we have the frightening spectacle of law professors telling us that what we need is a government of men and not of laws. If this school needs a name, it should call itself “Lawyers for Gang Warfare.”

You may be wondering whether things are better in the physical sciences today. They are, somewhat, but science, too, depends on philosophy. Modern science arose in an Aristotelian period, a period characterized by respect for reason and objective reality, and it cannot survive the collapse of that philosophy. One sign of this is the skepticism among scientists illustrated by the Harvard astronomer I quoted earlier. But there is another, even more ominous sign. I mean the claims made by an increasing number of physicists that modern physics is growing closer to Oriental mysticism; you may have heard the tributes that these scientists now lavish on works such as the Upanishads and the I Ching. In a rather mild statement, one such scientist wrote recently that there is a “curious connection between the sub-rational and the super-rational. Intuition, sudden flashes of insight, and even mystical experiences seem to play a role in the restructuring of science.” This quote, by the way, is from a textbook written by the Head of the Astrophysics Department at the University of Colorado (Boulder).

I have said that men cannot be consistent skeptics. One way out is to turn for guidance to society. But there is another way: old-fashioned mysticism — the turning not to society, but to the supernatural. Although this method was hardly originated by Kant, here, too, his influence is at work today. Our minds cannot know reality, Kant said, but certain of our feelings — our unprovable, nonconceptual, nonrational feelings — can give us a hint as to its nature. This Kantian suggestion — that the mind is helpless, but feelings may be able to replace it as a cognitive faculty — was taken up in the nineteenth century by a whole school of Romanticists, such as Schopenhauer and Nietzsche, who admired and agreed with the essential ideas of Kant, and proceeded to unleash a flood of overt irrationalism, often including a deep admiration for Oriental mysticism. Today, this particular development has also become widespread in the West; you can see it in everything from art to psychotherapy to diet fads, and it is showing up now even in physics. If scientists do not have a rational philosophy to guide them, they, too, have to sink back ultimately into the common horde.

If you wonder what kind of physics is being produced by these mystical scientists, let me quote one paragraph from the Colorado textbook. The passage occurs in the context of an attack on the concept of reality.

Even more disruptive to our notions of reality is the recognition that it is impossible to describe the entirety of an object at one time. Because of the finite speed of light no object has an instantaneous existence. All extended objects are fuzzy time averages. In order for an object to be totally present at a given instant of time, instantaneous communication would be required. Since that is impossible, all parts of an object exist in the past of every other part.

Our present does not exist. One not only needs a clairvoyant to foretell the future but also to foretell the present.

The name of this textbook, by the way, is The Fermenting Universe. 12 J. McKim Malville (Seabury Press: 1981), pp. 44, 18. I do not say that this book is typical of our college science, not yet. What I do say is this: it is significant, it is frightening, that such a book by an author in such a prestigious position is even possible.

As to the wider meaning of the latest scientific theorizing taken as a whole, I will leave it to an intellectual historian from SUNY (Oswego) to comment. This professor seems to agree with all the skeptical and mystical modern interpretations of science. In a lecture entitled “The Collapse of Absolutes,” he sums up for his students:

What does all this mean? Well, first of all, it means that the universe has become unintelligible. . . . Secondly, scientists themselves have become humble and admit that science may never be able to observe reality. . . . Thirdly, the physical world of Einstein has become something that even the most educated layman finds difficult to understand . . . He in short finds it incomprehensible and irrational. 13 Lecture by Thomas Judd; date and course title unknown.

In other words, if the college student runs to science as an escape from the humanities and the social sciences, he is learning there, too, that the mind is impotent.

Philosophy sets the standards for every school and department within a university. When philosophy goes bad, corrupt manifestations turn up everywhere. Visit Stanford’s Graduate School of Business, for instance, and audit a course titled “Creativity in Business” offered to MBA candidates. I quote the San Francisco Chronicle:

The students [in this course] learn meditation and chanting, analyze dreams, paint pictures, study I Ching and tarot cards. . . . The course reading includes I am That by Swami Muktananda . . . Precision Nirvana . . . Yoga Aphorisms. . . . One woman who had been a Moonie earlier in her life was fearful after a couple of sessions that she was getting into the same sort of thing, said [the professor]. It’s nothing of the kind, he added, but the heavy emphasis on developing the intuitive side of a student’s mind, where creativity is expressed, can sometimes leave that impression.

There are, this professor teaches his students, two main blocks to creativity. One is fear; the other is: “the endless chattering of the mind.” 14 Jerry Carroll, “Over-Achievers Swarm to This Exotic Class,” Feb. 17, 1983, p. 46. If mysticism is the fashion among scientists, why not among our future business leaders, too?

According to The Chronicle of Higher Education, the Moonies and the Hare Krishnas have become a problem to the colleges. “Many administrators . . . agree that religious cults have found college campuses to be among their more profitable recruiting grounds in recent years.” 15 Lawrence Biemiller, “Campuses Trying to Control Religious Cults,” April 6, 1983. This is hardly a mystery. The colleges, by means of what they are teaching, are systematically setting the students up to be taken over. The Reverend Moon or his equivalent will be the ultimate profiteer of today’s trends if these are not stopped.

Now let us switch fields and turn to the area of sex education. I suggest you read a text widely used in junior high and high schools, cited by the American Library Association as one of the “Best Books for Young Adults in 1978.” The book claims, to impressionable teenagers, that anything in the realm of sex is acceptable as long as those who do it feel no guilt. Among other practices, the book explicitly endorses transvestism, prostitution, open marriage, sado-masochism, and bestiality. In regard to this latter, however, the book cautions the youngsters to avoid “poor hygiene, injury by the animal or to the animal, or guilt on the part of the human.” 16 Quoted by Diane Ravitch, “The New Right and the Schools,” American Educator, Fall 1982, p. 13. Professor Ravitch does not give the book’s title.

If you want still more, turn to art — for instance, poetry — as it is taught today in our colleges. For an eloquent example, read the widely used Norton’s Introduction to Poetry, and see what modern poems are offered to students alongside the recognized classics of the past as equally deserving of study, analysis, respect. One typical entry, which immediately precedes a poem by Blake, is entitled “Hard Rock Returns to Prison from the Hospital for the Criminal Insane.” The poem begins: “Hard Rock was ‘known not to take no shit / From nobody’ . . .” and continues in similar vein throughout. This item can be topped only by the volume’s editor, who discusses the poem reverently, explaining that it has a profound social message: “the despair of the hopeless.” 17 Ed. by J. Paul Hunter, 2nd ed. (Norton: 1981). Just as history is what historians say, so art today is supposed to be whatever the art world endorses, and this is the kind of stuff it is endorsing. After all, the modernists shrug, who is to say what’s really good in art? Aren’t Hard Rock’s feelings just as good as Tennyson’s or Milton’s?

Now I want to discuss the cash value of the trends we have been considering. The base of philosophy is metaphysics and epistemology, i.e., a view of reality and of reason. The first major result of this base, its most important practical consequence, is ethics or morality, i.e., a code of values.

The founding fathers held a definite view of morality. Although they were not consistent, their distinctive ethical principle was: a man’s right to the pursuit of happiness, his own happiness, to be achieved by his own thought and effort — which means: not an ethics of self-sacrifice, but of self-reliance and self-fulfillment — in other words, an ethics of egoism, or what Ayn Rand called “the virtue of selfishness.” The founding fathers built this country on a twofold philosophical basis: first, on the championship of reason; then, as a result, on the principle of egoism, in the sense just indicated. The product of this combination was the idea: let us have a political system in which the individual is free to function by his own mind and for his own sake or profit. Such was the grounding of capitalism in America.

Just as our modern colleges have declared war on the first of these ideas (on reason), so they have declared war on the second. Here again they are following Kant. Kant was the greatest champion of self-sacrifice in the history of thought. He held that total selflessness is man’s duty, that suffering is man’s destiny in life, and that any egoistic motive, any quest for personal joy and any form of self-love, is the antonym of morality.

The Dean of Arts and Sciences at Colgate University expressed a similar viewpoint clearly in some convocation remarks he offered in 1981, attacking what he saw as an epidemic of egoism on campus. Egoism, the dean claimed, necessarily means whim-worship. Here is his definition of egoism: “serving the self, or taking care of number one . . . mindless hedonism and a concern for me, me now.” Where did he get this definition? Why can’t an egoist be enlightened, rational, long-range? No answer was given. The proper path for us to follow, the dean went on, was indicated by the “socially concerned” students of the sixties, with their “emphasis on duty to others” and on “the ascetic mode.” We may leave aside here the actual moral character of those violent, drug-addicted rebels of the sixties so admired by the dean. The point is the choice he offers: mindless hedonism versus asceticism — note the word — i.e., utter self-abnegation, renunciation, sacrifice. Today’s students, the dean said disapprovingly, attend college for reasons such as “to get a better job and to make more money.” This, he said, is wrong. “It is . . . my hope for you that you will recognize that there is life outside the self, that we live in a world that cries out for those with visions of a community founded upon just principles. . . . and [I] wish that preoccupation with self will give way to concern for others.” 18 Founders Day Convocation remarks, Sept. 8, 1981, reprinted in Colgate Scene, Oct. 1981, pp. 1–2.

Professors sometimes take sides in a controversy, but deans, to my knowledge, never do. When a dean makes an ideological statement, you can be sure that it is a universally accepted bromide on campus.

Our colleges are allegedly open to all ideas, yet on the fundamental issues of philosophy we hear everywhere the same rigid, dogmatic viewpoint, just as though the faculties were living and teaching under government censorship. I visited Columbia’s graduation exercises last year, and the priest who delivered the invocation declared to the assembled graduates: “The age of individual achievement has passed. When you come to Columbia, you are not to be motivated by the desire for money, or personal ambition, or success; you are here to learn to serve. And my prayer for you today is that at the end of your life you will be able to say, ‘Lord, I have been an unworthy servant.’” If that priest had come out with a plug for the Communist party, it would have caused a stir; if he had upheld the superiority of Catholicism, ditto. But to state as self-evident the moral code common to both caused not a murmur of protest.

A social psychologist from Harvard, who also regards that code as self-evident, has devised a test to measure a person’s level of moral reasoning. This test is the basis of many of the new courses in morality now being offered in schools around the country. The testers give the student a hypothetical situation and several possible responses to it. He then chooses the response that best fits his own attitude. Here is a typical example. “Your spouse is dying from a rare cancer, and doctors believe a drug recently discovered by the town pharmacist may provide a cure. The pharmacist, however, charges $2,000 for the drug (which costs only $200 to make). You can’t afford the drug and can’t raise the money.”

Before we proceed to the answers, observe what moral lessons a student would absorb from the statement of the problem alone. Morality does not pertain to normal situations, it is not concerned with how to live, he learns, but with how to meet disaster, death, terminal cancer. The obstacle to his values, he learns, is greed, the greed of the pharmacist who is trying to exploit him by charging ten times the cost of the product. There is no mention of any effort the pharmacist might have exerted to discover the drug, no mention of any research or thought or study required of him in order to have discovered an unprecedented cure for cancer, no mention of any other costs he might have incurred, no question of any gratitude to the man who alone has created the power to save the spouse, no mention of any reason why that pharmacist, counter to every principle of self-interest, would overcharge for the drug when he would make more money in the long run by selling it in greater quantity at a lower price, as the whole history of mass production shows. All of this — in an exercise designed to teach moral reasoning — is omitted as irrelevant. Nor is there any explanation of why the student cannot raise money — no reference to banks, or savings, or insurance, or relatives. The case is simple: senseless greed on the part of a callous inventor, and what do you do about it?

Now comes the answer — six choices, and you must pick one; the answers are given in ascending order, the morally lowest first. The lowest is: not to steal the drug (not out of respect for property rights, that doesn’t enter even on the lowest rung of the test, but out of fear of jail). The other five answers all advocate stealing the drug; they differ merely in their reasons. Here are the three most moral reasons, according to the test: “(4) I would steal the drug because I have a duty springing from the marriage vow I took. (5) I would steal the drug because the right to life is higher than the right to property. (6) I would steal the drug because I respect the dignity of human beings. . . . [I should] act in the best interest of mankind.” 19 The wording of the situation and responses is from Christy Hudgins, “Teaching Morality: A Test for the 1970s,” Minneapolis Star, Mar. 26, 1979, p. 3B. The author of the six-stage morality scale is Lawrence Kohlberg.

Here is an eloquent example of what Ayn Rand has amply demonstrated: the creed of self-sacrifice is not concerned with the “dignity of human beings” or with “the best interest of mankind.” This creed is the destroyer of human dignity and of mankind, because it is incompatible with the requirements of human life. It scorns — and dismisses as irrelevant — thought, effort, work, achievement, property, trade, justice, every value life requires. All of this is to be sacrificed, the altruist claims, to that which has the first right on earth: pain, pain as such, weakness, illness, suffering, regardless of its cause. This is the penalization of success for being success and the rewarding of failure for being failure; it is what Ayn Rand called the hatred of the good for being the good; and it is now being taught to our children, courtesy of a Harvard authority, as an example of high-quality moral reasoning. (As to what will happen to the weak and the sick after the able and productive have been demeaned, expropriated, and throttled, read Atlas Shrugged, or look at Soviet Russia.)

Did Ayn Rand exaggerate in saying that altruists wish to sacrifice thought to pain? Let me quote from Dental Products Report magazine in 1982. I do not know first-hand whether this item is true; I hope not. “Some medical schools in the United States are considering major changes in the traditional curriculum requirements for premed and medical students. Harvard, for example, is considering abolishing requirements for premed science and, instead, requiring courses stressing compassion and understanding in dealing with patients.” 20 “Medical Schools May Stress Compassion, Practical Experience,” Nov.–Dec. 1982.

Did you hear that one? Our doctors may not study much science any longer, but they will be skilled in expressing compassion to the suffering — who will suffer permanently, without any chance of relief, because the doctors will no longer be wasting their time on science or thought. This is a perfect, fiction-like example of an altruistic curriculum change, if ever I heard one.

Now let us sum up the total philosophy advocated by today’s colleges: reality has collapsed; reason is naive; achievement is unnecessary and unreal. I sometimes fantasize the ideal modern curriculum, which would capture explicitly the fundamental ideas of the modern university, and recently I found it. I found three actual courses offered at three different schools, one covering each basic branch of philosophy, the sum indicating the naked essence of the modern trend.

For metaphysics, we go to the University of Delaware (Newark) to take an interdisciplinary honors course titled: “Nothing.” Subtitle: “A study of Nil, Void, Vacuum, Null, Zero, and Other Kinds of Nothingness.” The description: “A lecture course exploring the varieties of nothingness from the vacuum and void of physics and astronomy to political nihilism, to the emptiness of the arts and the soul.” 21 Course no. A5 267–80, Spring 1979. That is our metaphysical base, our view of reality: nothing.

For epistemology, we move to New York University to take a course titled “Theory of Knowledge.” The description: “Various theories of knowledge are discussed, including the view that they are all inadequate and that, in fact, nobody knows anything. The consequences of skepticism are explored for thought, action, language, and emotional relations.” 22 Philosophy V83.0083, 1981–82.

We end up, for ethics, at Indiana (Bloomington), taking a course titled: “Social Reactions to Handicaps,” the description of which reads, in part: “This course will . . . explore some of the different ways in which the handicapped individual and the idea of handicap have been regarded in Western Civilization. Figures from the past such as the fool, the madman, the blind beggar, and the witch . . . will be discussed.” 23 Course no. H200, cross-listed as Education F200; date unknown.

There was once a time when college students studied facts, knowledge, and human greatness. Now they study nothingness, ignorance, and the fool, the madman, the blind beggar, and the witch.

If the philosophical message taught by our colleges is clear to you, the political views of the faculties will require very little discussion. Politics is a consequence of philosophy. The precondition of capitalism is egoism, and beneath that: the efficacy of reason. The consequence of unreason and self-sacrifice, by contrast, is this idea: the individual is helpless on his own and has no value anyway, and therefore should merge himself into the group and obey its spokesman, the state. Given today’s basic ideas, in short, the collectivism and statism of the faculties are inevitable — and too obvious to need documentation.

What I do want to mention is the political end result of our current trend. In The Ominous Parallels I argue that the intellectuals are preparing us for a totalitarian dictatorship. This may seem like an exaggeration, so I want to offer one final quote, this one from a philosopher, Richard Rorty, long at Princeton, now at the University of Virginia. Professor Rorty, himself a thorough modern, does not shrink from spelling out the final consequences of the modern skepticism; whatever you think of him, he has the honesty to state his ideas forthrightly. There is no truth, he holds, there is no such subject as philosophy, there are no objective standards by which to evaluate or criticize social and political practices. No matter what is done to the citizens of a country, therefore, they can have no objective grounds on which to protest.

Once, Professor Rorty writes, men could criticize political dictators, at least in their own minds. They could say to the dictator: “‘There is something within you which you are betraying. Though you embody the practices of a totalitarian society which will endure forever, there is something beyond those practices which condemns you.’” Once, he states, we could have said that; but no longer. Now we know that there is no knowledge, no values, no standards. Now we must accept the fact “that we have not once seen the Truth, and so will not, intuitively, recognize it when we see it again. This means that when the secret police come, when the torturers violate the innocent, there is nothing to be said to them.” Professor Rorty, I must add, claims to be disturbed by this result; but he is propagating it vigorously all the same. 24 Richard Rorty, “The Fate of Philosophy,” New Republic, Oct. 18, 1982, p. 33.

Ladies and gentlemen, higher education today has a remarkable press. We hear over and over about the value of our colleges and universities, their importance to the nation, and our need to contribute financially to their survival and growth. In regard to many professional and scientific schools, this is true. But in regard to the arts, the humanities, the social sciences, the opposite is true. In those areas, with some rare exceptions, our colleges and universities are a national menace, and the better the university, such as Harvard and Berkeley and Columbia, the worse it is. Today’s college faculties are hostile to every idea on which this country was founded, they are corrupting an entire generation of students, and they are leading the United States to slavery and destruction.

What is the solution? The only answer to a corrupt philosophy is a rational philosophy, and the only way to spread a rational philosophy is through the universities. The universities today — not the churches any longer, and not the press or TV — are the main transmitters of philosophy; they are what set the tone and direction of a culture. To those of you of college age, therefore, those who do not subscribe to Kant’s philosophy, I want to say that the moral of my remarks is not: quit college. On the contrary, if you are considering college or are already enrolled in one, I urge you to enter or stay, stay and fight the system, by trying to gain a hearing for some other ideas, some pro-American ideas. The colleges pretend to be open to all viewpoints, even though they are not. The only hope is to make them live up to their pretense. If you give up the colleges, you give up any role in the decisive battle for the world, the intellectual battle.

I am not suggesting that you become a martyr, or enter into arguments with professors who will penalize you for your ideas. Not all of them will, however, and I am speaking within the context and limits of rational self-interest. Within that context, I say: speak up when appropriate, let your voice be heard on campus, try to stick it out and obtain your degree, come back to teach if you can get in the door and if that is the lifework you want; and if you are an alumnus, be careful what kind of academic programs you support financially. In this battle, every word, man, and penny counts.

I wish I could tell you that your college years will be a glorious crusade. Actually, they will probably be a miserable experience. If you are a philosophically pro-American student, you have to expect every kind of smear from many of your professors. If you uphold the power of reason, you will be called a fanatic or a dogmatist. If you uphold the right to happiness, you will be called anti-social or even a fascist. If you admire Ayn Rand, you will be called a cultist. You will experience every kind of injustice, and even hatred, and you will be unbelievably bored most of the time, and often you will be alone and lonely. But if you have the courage to venture out into this kind of nightmare, you will not only be acquiring the diploma necessary for your professional future, you will also be helping to save the world, and we are all in your debt.

The young lady who typed this speech said to me at this point: “It’s pretty depressing. Aren’t you going to end on an inspiring note?” I wish I could think of one. Perhaps, someday, Objectivists will start a better university, which would provide a real alternative to the current scene and offer sanctuary to the kind of young minds now being tortured by the Establishment. But this project, though possible, is still far from being a reality.

To those of you in the college trenches today, therefore, I have only a bleak conclusion to offer. And even though I am an atheist, I know no better way to say it: God bless you, and God help you!

The Psychology of Psychologizing

This essay was originally published in the March 1971 issue of The Objectivist and later anthologized in The Voice of Reason: Essays in Objectivist Thought (1989).

In certain passages of Atlas Shrugged, I touched briefly on issues which I wanted to discuss theoretically at a later date and at greater length.

One such passage is the scene in which Hank Rearden, struggling to understand his wife’s behavior, wonders whether the motive of her constant, spiteful sarcasm is “not a desire to make him suffer, but a confession of her own pain, a defense for the pride of an unloved wife, a secret plea — so that the subtle, the hinted, the evasive in her manner, the thing begging to be understood, was not the open malice, but the hidden love.”

Struggling to be just, he gives her the benefit of the doubt and suppresses the warning of his own mind. “He felt a dim anger, like a voice he tried to choke, a voice crying in revulsion: Why should I deal with her rotten, twisted lying? — why should I accept torture for the sake of pity? — why is it I who should have to take the hopeless burden of trying to spare a feeling she won’t admit, a feeling I can’t know or understand or try to guess? — if she loves me, why doesn’t the damn coward say so and let us both face it in the open?”

Rearden was the innocent victim of a widespread game that has many variants and ramifications, none of them innocent, a game that could be called a racket. It consists, in essence, of substituting psychology for philosophy.

Today, many people use psychology as a new form of mysticism: as a substitute for reason, cognition and objectivity, as an escape from the responsibility of moral judgment, both in the role of the judge and the judged.

Mysticism requires the notion of the unknowable, which is revealed to some and withheld from others; this divides men into those who feel guilt and those who cash in on it. The two groups are interchangeable, according to circumstances. When being judged, a mystic cries: “I couldn’t help it!” When judging others, he declares: “You can’t know, but I can.” Modern psychology offers him both opportunities.

Once, the power superseding and defeating man’s mind was taken to be predetermined fate, supernatural will, original sin, etc.; now it is one’s own subconscious. But it is still the same old game: the notion that the wishes, the feelings, the beliefs — and, today, the malfunction — of a human consciousness can absolve a man from the responsibility of cognition.

Just as reasoning, to an irrational person, becomes rationalizing, and moral judgment becomes moralizing, so psychological theories become psychologizing. The common denominator is the corruption of a cognitive process to serve an ulterior motive.

Psychologizing consists in condemning or excusing specific individuals on the grounds of their psychological problems, real or invented, in the absence of or contrary to factual evidence.

As a science, psychology is barely making its first steps. It is still in the anteroom of science, in the stage of observing and gathering material from which a future science will come. This stage may be compared to the pre-Socratic period in philosophy; psychology has not yet found a Plato, let alone an Aristotle, to organize its material, systematize its problems, and define its fundamental principles.

A conscientious psychotherapist, of almost any school, knows that the task of diagnosing a particular individual’s problems is extremely complex and difficult. The same symptom may indicate different things in different men, according to the total context and interaction of their various premises. A long period of special inquiry is required to arrive even at a valid hypothesis.

This does not stop the amateur psychologizers. Armed with a smattering not of knowledge, but of undigested slogans, they rush, unsolicited, to diagnose the problems of their friends and acquaintances. Pretentiousness and presumptuousness are the psychologizer’s invariable characteristics: he not merely invades the privacy of his victims’ minds, he claims to understand their minds better than they do, to know more than they do about their own motives. With reckless irresponsibility, which an old-fashioned mystic oracle would hesitate to match, he ascribes to his victims any motivation that suits his purpose, ignoring their denials. Since he is dealing with the great “unknowable” — which used to be life after death or extrasensory perception, but is now man’s subconscious — all rules of evidence, logic, and proof are suspended, and anything goes (which is what attracts him to his racket).

The harm he does to his victims is incalculable. People who have psychological problems are confused and suggestible; unable to understand their own inner state, they often feel that any explanation is better than none (which is a very grave error). Thus the psychologizer succeeds in implanting new doubts in their minds, augmenting their sense of guilt and fear, and aggravating their problems.

The unearned status of an “authority,” the chance to air arbitrary pronouncements and frighten people or manipulate them, are some of the psychologizer’s lesser motives. His basic motive is worse. Observe that he seldom discovers any virtuous or positive elements hidden in his victims’ subconscious; what he claims to discover are vices, weaknesses, flaws. What he seeks is a chance to condemn — to pronounce a negative moral judgment, not on the grounds of objective evidence, but on the grounds of some intangible, unprovable processes in a man’s subconscious untranslated into action. This means: a chance to subvert morality.

The basic motive of most psychologizers is hostility. Caused by a profound self-doubt, self-condemnation, and fear, hostility is a type of projection that directs toward other people the hatred which the hostile person feels toward himself. Blaming the evil of others for his own shortcomings, he feels a chronic need to justify himself by demonstrating their evil, by seeking it, by hunting for it — and by inventing it. The discovery of actual evil in a specific individual is a painful experience for a moral person. But observe the almost triumphant glee with which a psychologizer discovers some ineffable evil in some bewildered victim.

The psychologizer’s subversion of morality has another, corollary aspect: by assuming the role of a kind of moral Grand Inquisitor responsible for the psychological purity of others, he deludes himself into the belief that he is demonstrating his devotion to morality and can thus escape the necessity of applying moral principles to his own actions.

This is his link to another, more obvious, and, today, more fashionable type of psychologizer who represents the other side of the same coin: the humanitarian cynic. The cynic turns psychology into a new, “scientific” version of determinism and — by means of unintelligible jargon derived from fantastically arbitrary theories — declares that man is ruled by the blind forces of his subconscious, which he can neither know nor control, that he can’t help it, that nobody can help what he does, that nobody should be judged or condemned, that morality is a superstition and anything goes.

This type has many subvariants, ranging from the crude cynic, who claims that innately all men are swine, to the compassionate cynic, who claims that anything must be forgiven and that the substitute for morality is love.

Observe that both types of psychologizers, the Inquisitor and the cynic, switch roles according to circumstances. When the Inquisitor is called to account for some action of his own, he cries: “I couldn’t help it!” When the humanitarian cynic confronts an unforgiving, moral man, he vents as virulent a stream of denunciations, hostility, and hatred as any Inquisitor — forgetting that the moral man, presumably, can’t help it.

The common denominator remains constant: escape from cognition and, therefore, from morality.

Psychologizing is not confined to amateurs acting in private. Some professional psychologists have set the example in public. As an instance of the Inquisitor type of psychologizing, there was the group of psychiatrists who libeled Senator Barry Goldwater [in 1964], permitting themselves the outrageous impertinence of diagnosing a man they had never met. (Parenthetically, Senator Goldwater exhibited a magnificent moral courage in challenging them and subjecting himself to their filthy malice in the ordeal of a trial, which he won. The Supreme Court, properly, upheld the verdict.) [Goldwater v. Ginzburg et al. 396 U.S. 1049]

As an example of the cynic type of psychologizing, there are the psychologists who rush to the defense of any murderer (such as Sirhan Sirhan), claiming that he could not help it, that the blame rests on society or environment or his parents or poverty or war, etc.

These notions are picked up by amateurs, by psychologizing commentators who offer them as excuses for the atrocities committed by “political” activists, bombers, college-campus thugs, etc. The notion that poverty is the psychological root of all evil is a typical piece of psychologizing, whose proponents ignore the fact that the worst atrocities are committed by the children of the well-to-do.

As examples of eclectic mixtures, there are the psychologizing biographies of historical figures that interpret the motives of men who died centuries ago — by means of a crude, vulgarized version of the latest psychological theories, which are false to begin with. And there are the countless psychologizing movies that explain a murderer’s actions by showing that his domineering mother did not kiss him good night at the age of six — or account for a girl’s frigidity by revealing that she once broke a doll representing her father.

Then there is the renowned playwright who was asked in a television interview why his plays always had unhappy endings, and who answered: “I don’t know. Ask my psychiatrist.”

While the racket of the philosophizing mystics rested on the claim that man is unable to know the external world, the racket of the psychologizing mystics rests on the claim that man is unable to know his own motivation. The ultimate goal is the same: the undercutting of man’s mind.

Psychologizers do not confine themselves to any one school of psychology. They snatch parts of any and all psychological theories as they see fit. They sneak along on the fringes of any movement. They exist even among alleged students of Objectivism.

The psychologizers’ victims are not always innocent or unwilling. The “liberation” from the responsibility of knowing one’s own motives is tempting to many people. Many are eager to switch the burden of judging their own moral stature to the shoulders of anyone willing to carry it. Men who do not accept the judgment of others as a substitute for their own in regard to the external world, turn into abject secondhanders in regard to their inner state. They would not go to a quack for a medical diagnosis of their physical health, but they entrust their mental health to any psychologizer who comes along. The innocent part of their reasons is their failure of introspection and the painful chaos of their psychological conflicts; the non-innocent part is fear of moral responsibility.

Both the psychologizers and their victims ignore the nature of consciousness and of morality.

An individual’s consciousness, as such, is inaccessible to others; it can be perceived only by means of its outward manifestations. It is only when mental processes reach some form of expression in action that they become perceivable (by inference) and can be judged. At this point, there is a line of demarcation, a division of labor, between two different sciences.

The task of evaluating the processes of man’s subconscious is the province of psychology. Psychology does not regard its subject morally, but medically — i.e., from the aspect of health or malfunction (with cognitive competence as the proper standard of health).

The task of judging man’s ideas and actions is the province of philosophy.

Philosophy is concerned with man as a conscious being; it is for conscious beings that it prescribes certain principles of action, i.e., a moral code.

A man who has psychological problems is a conscious being; his cognitive faculty is hampered, burdened, slowed down, but not destroyed. A neurotic is not a psychotic. Only a psychotic is presumed to suffer from a total break with reality and to have no control over his actions or the operations of his consciousness (and even this is not always true). A neurotic retains the ability to perceive reality, and to control his consciousness and his actions (this control is merely more difficult for him than for a healthy person). So long as he is not psychotic, this is the control that a man cannot lose and must not abdicate.

Morality is the province of philosophical judgment, not of psychological diagnosis. Moral judgment must be objective, i.e., based on perceivable, demonstrable facts. A man’s moral character must be judged on the basis of his actions, his statements, and his conscious convictions — not on the basis of inferences (usually spurious) about his subconscious.

A man is not to be condemned or excused on the grounds of the state of his subconscious. His psychological problems are his private concern which is not to be paraded in public and not to be made a burden on innocent victims or a hunting ground for poaching psychologizers. Morality demands that one treat and judge men as responsible adults.

This means that one grants a man the respect of assuming that he is conscious of what he says and does, and one judges his statements and actions philosophically, i.e., as what they are — not psychologically, i.e., as leads or clues to some secret, hidden, unconscious meaning. One neither speaks nor listens to people in code.

If a man’s consciousness is hampered by malfunction, it is the task of a psychologist to help him correct it — just as it is the task of a doctor to help correct the malfunction of a man’s body. It is not the task of an astronaut-trainer or a choreographer to adjust the techniques of space flying or of ballet dancing to the requirements of the physically handicapped. It is not the task of philosophy to adjust the principles of proper action (i.e., of morality) to the requirements of the psychologically handicapped — nor to allow psychologizers to transform such handicaps into a moral issue, one way or the other.

It is not man’s subconscious, but his conscious mind that is subject to his direct control — and to moral judgment. It is a specific individual’s conscious mind that one judges (on the basis of objective evidence) in order to judge his moral character.

Every kind of psychologizing involves the false dichotomy whose extremes are represented by the Inquisitor and the cynic. The alternative is not: rash, indiscriminate moralizing or cowardly, evasive moral neutrality — i.e., condemnation without knowledge or the refusal to know in order not to condemn. These are two interchangeable variants of the same motive: escape from the responsibility of cognition and of moral judgment.

In dealing with people, one necessarily draws conclusions about their characters, which involves their psychology, since every character judgment refers to a man’s consciousness. But it is a man’s subconscious and his psychopathology that have to be left alone, particularly in moral evaluations.

A layman needs some knowledge of medicine in order to know how to take care of his own body — and when to call a doctor. The same principle applies to psychology: a layman needs some knowledge of psychology in order to understand the nature of a human consciousness; but theoretical knowledge does not qualify him for the extremely specialized job of diagnosing the psychopathological problems of specific individuals. Even self-diagnosis is often dangerous: there is such a phenomenon as psychological hypochondriacs, who ascribe to themselves every problem they hear or read about.

Allowing for exceptions in special cases, it is not advisable to discuss one’s psychological problems with one’s friends. Such discussions can lead to disastrously erroneous conclusions (since two amateurs are no better than one, and sometimes worse) — and they introduce a kind of medical element that undercuts the basis of friendship. Friendship presupposes two firm, independent, reliable, and responsible personalities. (This does not mean that one has to lie, put on an act and hide from one’s friends the fact that one has problems; it means simply that one does not turn a friend into a therapist.)

The above applies to psychological discussions between two honest persons. The opportunities such discussions offer to the dishonest are obvious: they are an invitation for every type of psychologizer to pounce upon. The Inquisitor will use them to frighten and manipulate a victim. The cynic will use them to attract attention to himself, to evoke pity, to wheedle special privileges. The old lady who talks about her operation is a well-known bore; she is nothing compared to the youngish lady who talks on and on and on about her psychological problems, with a lameness of imagination that prevents them from being good fiction.

Psychological problems as such are not a disgrace; it is what a person does about them that frequently is.

Since a man’s psychological problems hamper his cognitive judgment (particularly the problems created by a faulty psycho-epistemology), it is his responsibility to delimit his problems as much as possible, to think with scrupulous precision and clarity before taking an action, and never to act blindly on the spur of an emotion (it is emotions that distort cognition in all types of psychological problems). In regard to other men, it is his responsibility to preserve the principle of objectivity, i.e., to be consistent and intelligible in his behavior, and not to throw his neurosis at others, expecting them to untangle it, which none of them can or should do.

This brings us to the lowest type of psychologizing, exemplified by Lillian Rearden.

Though her behavior was a calculated racket, the same policy is practiced by many people, in many different forms, to varying extents, moved by various mixtures of cunning, inertia, and irresponsibility. The common denominator is the conscious flouting of objectivity — in the form of the self-admitted inability and/or unwillingness to explain one’s own actions. The pattern goes as follows: “Why did you do this?” “I don’t know.” “What were you after?” “I don’t know.” “Since I can’t understand you, what do you expect me to do?” “I don’t know.”

This policy rests on the notion that the content of one’s consciousness need not be processed.

It is only a newborn infant that could regard itself as the helplessly passive spectator of the chaotic sensations which are the content of its consciousness (but a newborn infant would not, because its consciousness is intensely busy processing its sensations). From the day of his birth, man’s development and growth to maturity consists in his mastery of the skill of processing his sensory-perceptual material, of organizing it into concepts, of integrating concepts, of identifying his feelings, of discovering their relation to the facts of reality. This processing has to be performed by a man’s own mind. No one can perform it for him. If he fails to perform it, he is mentally defective. It is only on the assumption that he has performed it that one treats him as a conscious being.

The evil of today’s psychologizing culture — fostered particularly by Progressive education — is the notion that no such processing is necessary.

The result is the stupor and lethargy of those who are neither infants nor adults, but miserable sleepwalkers unwilling to wake up. Anything can enter the spongy mess inside their skulls, nothing can come out of it. The signals it emits are chance regurgitations of any chance splatter.

They have abdicated the responsibility for their own mental processes, yet they continue to act, to speak, to deal with people — and to expect some sort of response. This means that they throw upon others the burden of the task on which they defaulted, and expect others to understand the unintelligible.

The number of people they victimize, the extent of the torture they impose on merciful, conscientious men who struggle to understand them, the despair of those whom they drive to the notion that life is incomprehensible and irrational, cannot be computed.

It should not be necessary to say it, but today it is: anyone who wants to be understood, has to make damn sure that he has made himself intelligible.

This is the moral principle that Hank Rearden glimpsed and should have acted upon at once.

It is only with a person’s conscious mind that one can deal, and it is only with his conscious mind that one can be concerned. The unprocessed chaos inside his brain, his unidentified feelings, his unnamed urges, his unformulated wishes, his unadmitted fears, his unknown motives, and the entire cesspool he has made of his stagnant subconscious are of no interest, significance, or concern to anyone outside a therapist’s office.

The visible image of an “unprocessed” mentality is offered by non-objective art. Its practitioners announce that they have failed to digest their perceptual data, that they have failed to reach the conceptual or fully conscious level of development, and that they offer you the raw material of their subconscious, whose mystery is for you to interpret.

There is no great mystery about it.

The mind is a processing organ; so is the stomach. If a stomach fails in its function, it throws up; its unprocessed material is vomit.

So is the unprocessed material emitted by a mind.

The Question of Scholarships

This essay was originally published in the June 1966 issue of The Objectivist and later anthologized in The Voice of Reason: Essays in Objectivist Thought (1989).

Many students of Objectivism are troubled by a certain kind of moral dilemma confronting them in today’s society. [I am] frequently asked the questions: “Is it morally proper to accept scholarships, private or public?” and: “Is it morally proper for an advocate of capitalism to accept a government research grant or a government job?”

I shall hasten to answer: “Yes” — then proceed to explain and qualify it. There are many confusions on these issues, created by the influence and implications of the altruist morality.

1. There is nothing wrong in accepting private scholarships. The fact that a man has no claim on others (i.e., that it is not their moral duty to help him and that he cannot demand their help as his right) does not preclude or prohibit good will among men and does not make it immoral to offer or to accept voluntary, non-sacrificial assistance.

It is altruism that has corrupted and perverted human benevolence by regarding the giver as an object of immolation and the receiver as a helplessly miserable object of pity who holds a mortgage on the lives of others — a doctrine which is extremely offensive to both parties, leaving men no choice but the roles of sacrificial victim or moral cannibal. A man of self-esteem can neither offer help nor accept it on such terms.

As a consequence, when people need help, the best of them (those who need it through no fault of their own) often prefer to starve rather than accept assistance — while the worst of them (the professional parasites) run riot and cash in on it to the full. (For instance, the student “activists” who, not satisfied with free education, demand ownership of the university as well.)

To view the question in its proper perspective, one must begin by rejecting altruism’s terms and all of its ugly emotional aftertaste — then take a fresh look at human relationships. It is morally proper to accept help, when it is offered not as a moral duty, but as an act of good will and generosity, when the giver can afford it (i.e., when it does not involve self-sacrifice on his part), and when it is offered in response to the receiver’s virtues, not in response to his flaws, weaknesses, or moral failures, and not on the ground of his need as such.

Scholarships are one of the clearest categories of this proper kind of help. They are offered to assist ability, to reward intelligence, to encourage the pursuit of knowledge, to further achievement — not to support incompetence.

If a brilliant child’s parents cannot send him through college (or if he has no parents), it is not a moral default on their part or his. It is not the fault of “society,” of course, and he cannot demand the right to be educated at someone else’s expense; he must be prepared to work his way through school, if necessary. But this is the proper area for voluntary assistance. If some private person or organization offers to help him, in recognition of his ability, and thus to save him years of struggle — he has the moral right to accept.

The value of scholarships is that they offer an ambitious youth a gift of time when he needs it most: at the beginning.

(The fact that in today’s moral atmosphere, those who give or distribute scholarships are often guilty of injustices and of altruistic motives, does not alter the principle involved. It represents their failure to live up to the principle; their integrity is not the recipient’s responsibility and does not affect his right to accept the scholarship in good faith.)

2. A different principle and different considerations are involved in the case of public (i.e., governmental) scholarships. The right to accept them rests on the right of the victims to the property (or some part of it) which was taken from them by force.

The recipient of a public scholarship is morally justified only so long as he regards it as restitution and opposes all forms of welfare statism. Those who advocate public scholarships have no right to them; those who oppose them have. If this sounds like a paradox, the fault lies in the moral contradictions of welfare statism, not in its victims.

Since there is no such thing as the right of some men to vote away the rights of others, and no such thing as the right of the government to seize the property of some men for the unearned benefit of others — the advocates and supporters of the welfare state are morally guilty of robbing their opponents, and the fact that the robbery is legalized makes it morally worse, not better. The victims do not have to add self-inflicted martyrdom to the injury done to them by others; they do not have to let the looters profit doubly, by letting them distribute the money exclusively to the parasites who clamored for it. Whenever the welfare-state laws offer them some small restitution, the victims should take it.

It does not matter, in this context, whether a given individual has or has not paid an amount of taxes equal to the amount of the scholarship he accepts. First, the sum of his individual losses cannot be computed; this is part of the welfare-state philosophy, which treats everyone’s income as public property. Second, if he has reached college age, he has undoubtedly paid — in hidden taxes — much more than the amount of the scholarship. Or, if his parents cannot afford to pay for his education, consider what taxes they have paid, directly or indirectly, during the twenty years of his life — and you will see that a scholarship is too pitifully small even to be called a restitution.

Third — and most important — the young people of today are not responsible for the immoral state of the world into which they were born. Those who accept the welfare-statist ideology assume their share of the guilt when they do so. But the anti-collectivists are innocent victims who face an impossible situation: it is welfare statism that has almost destroyed the possibility of working one’s way through college. It was difficult but possible some decades ago; today, it has become a process of close-to-inhuman torture. There are virtually no part-time jobs that pay enough to support oneself while going to school; the alternative is to hold a full-time job and to attend classes at night — which takes eight years of unrelenting twelve-to-sixteen-hour days, for a four-year college course. If those responsible for such conditions offer the victim a scholarship, his right to take it is incontestable — and it is too pitifully small an amount even to register on the scales of justice, when one considers all the other, the nonmaterial, nonamendable injuries he has suffered.

The same moral principles and considerations apply to the issue of accepting social security, unemployment insurance, or other payments of that kind. It is obvious, in such cases, that a man receives his own money which was taken from him by force, directly and specifically, without his consent, against his own choice. Those who advocated such laws are morally guilty, since they assumed the “right” to force employers and unwilling coworkers. But the victims, who opposed such laws, have a clear right to any refund of their own money — and they would not advance the cause of freedom if they left their money, unclaimed, for the benefit of the welfare-state administration.

3. The same moral principles and considerations apply to the issue of government research grants.

The growth of the welfare state is approaching the stage where virtually the only money available for scientific research will be government money. (The disastrous effects of this situation and the disgraceful state of government-sponsored science are apparent already, but that is a different subject. We are concerned here only with the moral dilemma of scientists.) Taxation is destroying private resources, while government money is flooding and taking over the field of research.

In these conditions, a scientist is morally justified in accepting government grants — so long as he opposes all forms of welfare statism. As in the case of scholarship recipients, a scientist does not have to add self-martyrdom to the injustices he suffers. And he does not have to surrender science to the Dr. Floyd Ferrises [this refers to a villain in Atlas Shrugged who is a government scientist].

Government research grants, for the most part, have no strings attached, i.e., no controls over the scientist’s intellectual and professional freedom (at least, not yet). When and if the government attempts to control the scientific and/or political views of the recipients of grants, that will be the time for men of integrity to quit. At present, they are still free to work — but, more than any other professional group, they should be on guard against the gradual, insidious growth of pressures to conform and of tacit control-by-intimidation, which are implicit in such conditions.

4. The same moral principles and considerations apply to the issue of taking government jobs.

The growth of government institutions has destroyed an incalculable number of private jobs and opportunities for private employment. This is more apparent in some professions (as, for instance, teaching) than in others, but the octopus of the “public sector” is choking and draining the “private sector” in virtually every line of work. Since men have to work for a living, the opponents of the welfare state do not have to condemn themselves to the self-martyrdom of a self-restricted labor market — particularly when so many private employers are in the vanguard of the advocates and profiteers of welfare statism.

There is, of course, a limitation on the moral right to take a government job: one must not accept any job that demands ideological services, i.e., any job that requires the use of one’s mind to compose propaganda material in support of welfare statism — or any job in a regulatory administrative agency enforcing improper, non-objective laws. The principle here is as follows: it is proper to take the kind of work which is not wrong per se, except that the government should not be doing it, such as medical services; it is improper to take the kind of work that nobody should be doing, such as is done by the F.T.C., the F.C.C., etc.

But the same limitation applies to a man’s choice of private employment: a man is not responsible for the moral or political views of his employers, but he cannot accept a job in an undertaking which he considers immoral, or in which his work consists specifically of violating his own convictions, i.e., of propagating ideas he regards as false or evil.

5. The moral principle involved in all the above issues consists, in essence, of defining as clearly as possible the nature and limits of one’s own responsibility, i.e., the nature of what is or is not in one’s power.

The issue is primarily ideological, not financial. Minimizing the financial injury inflicted on you by the welfare-state laws does not constitute support of welfare statism (since the purpose of such laws is to injure you) and is not morally reprehensible. Initiating, advocating, or expanding such laws is.

In a free society, it is immoral to denounce or oppose that from which one derives benefits — since one’s associations are voluntary. In a controlled or mixed economy, opposition becomes obligatory — since one is acting under force, and the offer of benefits is intended as a bribe.

So long as financial considerations do not alter or affect your convictions, so long as you fight against welfare statism (and only so long as you fight it) and are prepared to give up any of its momentary benefits in exchange for repeal and freedom — so long as you do not sell your soul (or your vote) — you are morally in the clear. The essence of the issue lies in your own mind and attitude.

It is a hard problem, and there are many situations so ambiguous and so complex that no one can determine what is the right course of action. That is one of the evils of welfare statism: its fundamental irrationality and immorality force men into contradictions where no course of action is right.

The ultimate danger in all these issues is psychological: the danger of letting yourself be bribed, the danger of a gradual, imperceptible, subconscious deterioration leading to compromise, evasion, resignation, submission. In today’s circumstances, a man is morally in the clear only so long as he remains intellectually incorruptible. Ultimately, these problems are a test — a hard test — of your own integrity. You are its only guardian. Act accordingly.

Through Your Most Grievous Fault

This article was originally published in the Los Angeles Times on August 19, 1962, two weeks after Marilyn Monroe’s death. The article was anthologized in The Voice of Reason: Essays in Objectivist Thought (1989) and The Ayn Rand Column (1991 and 1998).

 

The death of Marilyn Monroe shocked people with an impact different from their reaction to the death of any other movie star or public figure. All over the world, people felt a peculiar sense of personal involvement and of protest, like a universal cry of “Oh, no!”

They felt that her death had some special significance, almost like a warning which they could not decipher — and they felt a nameless apprehension, the sense that something terribly wrong was involved.

They were right to feel it.

Marilyn Monroe on the screen was an image of pure, innocent, childlike joy in living. She projected the sense of a person born and reared in some radiant utopia untouched by suffering, unable to conceive of ugliness or evil, facing life with the confidence, the benevolence, and the joyous self-flaunting of a child or a kitten who is happy to display its own attractiveness as the best gift it can offer the world, and who expects to be admired for it, not hurt.

In real life, Marilyn Monroe’s probable suicide — or worse: a death that might have been an accident, suggesting that, to her, the difference did not matter — was a declaration that we live in a world which made it impossible for her kind of spirit, and for the things she represented, to survive.

If there ever was a victim of society, Marilyn Monroe was that victim — of a society that professes dedication to the relief of the suffering, but kills the joyous.

None of the objects of the humanitarians’ tender solicitude, the juvenile delinquents, could have had so sordid and horrifying a childhood as did Marilyn Monroe.

To survive it and to preserve the kind of spirit she projected on the screen — the radiantly benevolent sense of life, which cannot be faked — was an almost inconceivable psychological achievement that required a heroism of the highest order. Whatever scars her past had left were insignificant by comparison.

She preserved her vision of life through a nightmare struggle, fighting her way to the top. What broke her was the discovery, at the top, of as sordid an evil as the one she had left behind — worse, perhaps, because incomprehensible. She had expected to reach the sunlight; she found, instead, a limitless swamp of malice.

It was a malice of a very special kind. If you want to see her groping struggle to understand it, read the magnificent article in the August 17, 1962, issue of Life magazine. It is not actually an article; it is a verbatim transcript of her own words — and the most tragically revealing document published in many years. It is a cry for help, which came too late to be answered.

“When you’re famous, you kind of run into human nature in a raw kind of way,” she said. “It stirs up envy, fame does. People you run into feel that, well, who is she — who does she think she is, Marilyn Monroe? They feel fame gives them some kind of privilege to walk up to you and say anything to you, you know, of any kind of nature — and it won’t hurt your feelings — like it’s happening to your clothing. . . . I don’t understand why people aren’t a little more generous with each other. I don’t like to say this, but I’m afraid there is a lot of envy in this business.”

“Envy” is the only name she could find for the monstrous thing she faced, but it was much worse than envy: it was the profound hatred of life, of success and of all human values, felt by a certain kind of mediocrity — the kind who feels pleasure on hearing about a stranger’s misfortune. It was hatred of the good for being the good — hatred of ability, of beauty, of honesty, of earnestness, of achievement and, above all, of human joy.

Read the Life article to see how it worked and what it did to her:

An eager child, who was rebuked for her eagerness — “Sometimes the [foster] families used to worry because I used to laugh so loud and so gay; I guess they felt it was hysterical.”

A spectacularly successful star, whose employers kept repeating: “Remember you’re not a star,” in a determined effort, apparently, not to let her discover her own importance.

A brilliantly talented actress, who was told by the alleged authorities, by Hollywood, by the press, that she could not act.

An actress, dedicated to her art with passionate earnestness — “When I was 5 — I think that’s when I started wanting to be an actress — I loved to play. I didn’t like the world around me because it was kind of grim — but I loved to play house and it was like you could make your own boundaries” — who went through hell to make her own boundaries, to offer people the sunlit universe of her own vision — “It’s almost having certain kinds of secrets for yourself that you’ll let the whole world in on only for a moment, when you’re acting” — but who was ridiculed for her desire to play serious parts.

A woman, the only one, who was able to project the glowingly innocent sexuality of a being from some planet uncorrupted by guilt — who found herself regarded and ballyhooed as a vulgar symbol of obscenity — and who still had the courage to declare: “We are all born sexual creatures, thank God, but it’s a pity so many people despise and crush this natural gift.”

A happy child who was offering her achievement to the world, with the pride of an authentic greatness and of a kitten depositing a hunting trophy at your feet — who found herself answered by concerted efforts to negate, to degrade, to ridicule, to insult, to destroy her achievement — who was unable to conceive that it was her best she was punished for, not her worst — who could only sense, in helpless terror, that she was facing some unspeakable kind of evil.

How long do you think a human being could stand it?

That hatred of values has always existed in some people, in any age or culture. But a hundred years ago, they would have been expected to hide it. Today, it is all around us; it is the style and fashion of our century.

Where would a sinking spirit find relief from it?

The evil of a cultural atmosphere is made by all those who share it. Anyone who has ever felt resentment against the good for being the good and has given voice to it, is the murderer of Marilyn Monroe.

Review of Aristotle by John Herman Randall, Jr.

This essay was originally published in The Objectivist Newsletter in May 1963 and later anthologized in The Voice of Reason: Essays in Objectivist Thought (1989).

A version was also delivered as a 28-minute radio address in May 1963.

If there is a philosophical Atlas who carries the whole of Western civilization on his shoulders, it is Aristotle. He has been opposed, misinterpreted, misrepresented, and — like an axiom — used by his enemies in the very act of denying him. Whatever intellectual progress men have achieved rests on his achievements.

Aristotle may be regarded as the cultural barometer of Western history. Whenever his influence dominated the scene, it paved the way for one of history’s brilliant eras; whenever it fell, so did mankind. The Aristotelian revival of the thirteenth century brought men to the Renaissance. The intellectual counterrevolution turned them back toward the cave of his antipode: Plato.

There is only one fundamental issue in philosophy: the cognitive efficacy of man’s mind. The conflict of Aristotle versus Plato is the conflict of reason versus mysticism. It was Plato who formulated most of philosophy’s basic questions — and doubts. It was Aristotle who laid the foundation for most of the answers. Thereafter, the record of their duel is the record of man’s long struggle to deny and surrender or to uphold and assert the validity of his particular mode of consciousness.

Today, philosophy has sunk below the level of Aristotle versus Plato, down to the primitive gropings of Parmenides versus Heraclitus, whose disciples were unable to reconcile the concept of intellectual certainty with the phenomenon of change: the Eleatics, who claimed that change is illogical, that in any clash between mind and reality, reality is dispensable and, therefore, change is an illusion — versus the Heraclitean Sophists, who claimed that mind is dispensable, that knowledge is an illusion and nothing exists but change. Or: consciousness without existence versus existence without consciousness. Or: blind dogmatism versus cynical subjectivism. Or: Rationalism versus Empiricism.

Aristotle was the first man who integrated the facts of identity and change, thus solving that ancient dichotomy. Or rather, he laid the foundation and indicated the method by which a full solution could be reached. In order to resurrect that dichotomy thereafter, it was necessary to ignore and evade his works. Ever since the Renaissance, the dichotomy kept being resurrected, in one form or another, always aimed at one crucial target: the concept of identity — always leading to some alleged demonstration of the deceptiveness, the limitations, the ultimate impotence of reason.

It took several centuries of misrepresenting Aristotle to turn him into a straw man, to declare the straw man invalidated, and to release such a torrent of irrationality that it is now sweeping philosophy away and carrying us back past the pre-Socratics, past Western civilization, into the prehistorical swamps of the Orient, via Existentialism and Zen Buddhism.

Today, Aristotle is the forgotten man of philosophy. Slick young men go about droning the wearisome sophistries of the fifth century B.C., to the effect that man can know nothing, while unshaven young men go about chanting that they do know by means of their whole body from the neck down.

It is in this context that one must evaluate the significance of an unusual book appearing on such a scene — Aristotle by John Herman Randall, Jr.

Let me hasten to state that the above remarks are mine, not Professor Randall’s. He does not condemn modern philosophy as it deserves — he seems to share some of its errors. But the theme of his book is the crucial relevance and importance of Aristotle to the philosophical problems of our age. And his book is an attempt to bring Aristotle’s theories back into the light of day — of our day — from under the shambles of misrepresentation by medieval mystics and by modern Platonists.

“Indeed,” he writes, “[Aristotle’s] may well be the most passionate mind in history: it shines through every page, almost every line. His crabbed documents exhibit, not ‘cold thought,’ but the passionate search for passionless truth. For him, there is no ‘mean,’ no moderation, in intellectual excellence. The ‘theoretical life’ is not for him the life of quiet ‘contemplation,’ serene and unemotional, but the life of nous, of theoria, of intelligence, burning, immoderate, without bounds or limits.”

Indicating that the early scientists had discarded Aristotle in rebellion against his religious interpreters, Professor Randall points out that their scientific achievements had, in fact, an unacknowledged Aristotelian base and were carrying out the implications of Aristotle’s theories.

Blaming the epistemological chaos of modern science on the influence of Newton’s mechanistic philosophy of nature, he writes:

It is fascinating to speculate how, had it been possible in the seventeenth century to reconstruct rather than abandon Aristotle, we might have been saved several centuries of gross confusion and error. . . . Where we are often still groping, Aristotle is frequently clear, suggestive, and fruitful. This holds true of many of his analyses: his doctrine of natural teleology; his view of natural necessity as not simple and mechanical but hypothetical; his conception of the infinite as potential, not actual; his notion of a finite universe; his doctrine of natural place; his conception of time as not absolute, but rather a dimension, a system of measurement; his conception that place is a coordinate system, and hence relative. On countless problems, from the standpoint of our present theory, Aristotle was right, where the nineteenth-century Newtonian physicists were wrong.

Objecting to “the structureless world of Hume in which ‘anything may be followed by anything,’” Professor Randall writes:

To such a view, which he found maintained by the Megarians, Aristotle answers, No! Every process involves the operation of determinate powers. There is nothing that can become anything else whatsoever. A thing can become only what it has the specific power to become, only what it already is, in a sense, potentially. And a thing can be understood only as that kind of thing that has that kind of a specific power; while the process can be understood only as the operation, the actualization, the functioning of the powers of its subject or bearer.

To read a concise, lucid presentation of Aristotle’s system, written by a distinguished modern philosopher — written in terms of basic principles and broad fundamentals, as against the senseless “teasing” of trivia by today’s alleged thinkers — is so rare a value that it is sufficient to establish the importance of Professor Randall’s book, in spite of its flaws.

Its flaws, unfortunately, are numerous. Professor Randall describes his book as “a philosopher’s delineation of Aristotle.” Since there are many contradictory elements and many obscure passages in Aristotle’s own works (including, in some cases, the question of their authenticity), it is a philosopher’s privilege (within demonstrable limits) to decide which strands of a badly torn fabric he chooses to present as significantly “Aristotelian.” But nothing — particularly not Aristotle — is infinite and indeterminate. And while Professor Randall tries to separate his presentation from his interpretation, he does not always succeed. Some of his interpretations are questionable; some are stretched beyond the limit of the permissible.

For instance, he describes Aristotle’s approach to knowledge as follows: “Knowing is for him an obvious fact. . . . The real question, as he sees it, is, ‘In what kind of a world is knowing possible?’ What does the fact of knowing imply about our world?” This is a form of “the prior certainty of consciousness” — the notion that one can first possess knowledge and then proceed to discover what that knowledge is of, thus making the world a derivative of consciousness — a Cartesian approach which would have been inconceivable to Aristotle and which Professor Randall himself is combating throughout his book.

Most of the book’s flaws come from the same root: from Professor Randall’s inability or unwillingness to break with modern premises, methods and terminology. The perceptiveness he brings to his consideration of Aristotle’s ideas, seems to vanish whenever he attempts to equate Aristotle with modern trends. To claim, as he does, that: “In modern terms, Aristotle can be viewed as a behaviorist, an operationalist, and a contextualist” (and, later, as a “functionalist” and a “relativist”), is either inexcusable or so loosely generalized as to rob those terms of any meaning.

Granted that those terms have no specific definitions and are used, like most of today’s philosophical language, in the manner of “mobiles” which connote, rather than denote — even so, their accepted “connotations” are so anti-Aristotelian that one is forced, at times, to wonder whether Professor Randall is trying to put something over on the moderns or on Aristotle. There are passages in the book to support either hypothesis.

On the one hand, Professor Randall writes: “That we can know things as they are, that such knowledge is possible, is the fact that Aristotle is trying to explain, and not, like Kant and his followers, trying to deny and explain away.” And: “Indeed, any construing of the fact of ‘knowledge,’ whether Kantian, Hegelian, Deweyan, Positivistic, or any other, seems to be consistent and fruitful, and to avoid the impasses of barren self-contradiction, and insoluble and meaningless problems, only when it proceeds from the Aristotelian approach, and pushes Aristotle’s own analyses further . . . only, that is, in the measure that it is conducted upon an Aristotelian basis.” (Though one wonders what exactly would be left of Kant, Hegel, Dewey, or the Positivists if they were stripped of their non-Aristotelian elements.)

On the other hand, Professor Randall seems to turn Aristotle into some foggy combination of a linguistic analyst and a Heraclitean, as if language and reality could be understood as two separate, unconnected dimensions — in such passages as: “When [Aristotle] goes on to examine what is involved in ‘being’ anything . . . he is led to formulate two sets of distinctions: the one set appropriate to understanding any ‘thing’ or ousia as a subject of discourse, the other set appropriate to understanding any ‘thing’ or ousia as the outcome of a process, as the operation or functioning of powers, and ultimately as sheer functioning, activity.”

It is true that Aristotle holds the answer to Professor Randall’s “structuralism-functionalism” dichotomy and that his answer is vitally important today. But his answer eliminates that dichotomy altogether — and one cannot solve it by classifying him as a “functionalist” who believed that things are “sheer process.”

The best parts of Professor Randall’s book are Chapters VIII, IX, and XI, particularly this last. In discussing the importance of Aristotle’s biological theory and “the biological motivation of Aristotle’s thought,” he brings out an aspect of Aristotle which has been featured too seldom in recent discussions and which is much more profound than the question of Aristotle’s “functionalism”: the central place given to living entities, to the phenomenon of life, in Aristotle’s philosophy.

For Aristotle, life is not an inexplicable, supernatural mystery, but a fact of nature. And consciousness is a natural attribute of certain living entities, their natural power, their specific mode of action — not an unaccountable element in a mechanistic universe, to be explained away somehow in terms of inanimate matter, nor a mystic miracle incompatible with physical reality, to be attributed to some occult source in another dimension. For Aristotle, “living” and “knowing” are facts of reality; man’s mind is neither unnatural nor supernatural, but natural — and this is the root of Aristotle’s greatness, of the immeasurable distance that separates him from other thinkers.

Life — and its highest form, man’s life — is the central fact in Aristotle’s view of reality. The best way to describe it is to say that Aristotle’s philosophy is “biocentric.”

This is the source of Aristotle’s intense concern with the study of living entities, the source of the enormously “pro-life” attitude that dominates his thinking. In some oddly undefined manner, Professor Randall seems to share it. This, in spite of all his contradictions, seems to be his real bond with Aristotle.

“Life is the end of living bodies,” writes Professor Randall, “since they exist for the sake of living.” And: “No kind of thing, no species is subordinated to the purposes and interests of any other kind. In biological theory, the end served by the structure of any specific kind of living thing is the good — ultimately, the ‘survival’ — of that kind of thing.” And, discussing the ends and conclusions of natural processes: “Only in human life are these ends and conclusions consciously intended, only in men are purposes found. For Aristotle, even God has no purpose, only man!”

The blackest patch in this often illuminating book is Chapter XII, which deals with ethics and politics. Its contradictions are apparent even without reference to Aristotle’s text. It is astonishing to read the assertion: “Aristotle’s ethics and politics are actually his supreme achievement.” They are not, even in their original form — let alone in Professor Randall’s version, which transforms them into the ethics of pragmatism.

It is shocking to read the assertion that Aristotle is an advocate of the “welfare state.” Whatever flaws there are in Aristotle’s political theory — and there are many — he does not deserve that kind of indignity.

Professor Randall, who stresses that knowledge must rest on empirical evidence, should take cognizance of the empirical fact that throughout history the influence of Aristotle’s philosophy (particularly of his epistemology) has led in the direction of individual freedom, of man’s liberation from the power of the state — that Aristotle (via John Locke) was the philosophical father of the Constitution of the United States and thus of capitalism — that it is Plato and Hegel, not Aristotle, who have been the philosophical ancestors of all totalitarian and welfare states, whether Bismarck’s, Lenin’s, or Hitler’s.

An “Aristotelian statist” is a contradiction in terms — and this, perhaps, is a clue to the conflict that mars the value of Professor Randall’s book.

But if read critically, this book is of great value in the study of Aristotle’s philosophy. It is a concise and comprehensive presentation which many people need and look for, but cannot find today. It is of particular value to college students: by providing a frame of reference, a clear summary of the whole, it will help them to grasp the meaning of the issues through the fog of the fragmentary, unintelligible manner in which most courses on Aristotle are taught today.

Above all, this book is important culturally, as a step in the right direction, as a recognition of the fact that the great physician needed by our dying science of philosophy is Aristotle — that if we are to emerge from the intellectual shambles of the present, we can do it only by means of an Aristotelian approach.

“Clearly,” writes Professor Randall, “Aristotle did not say everything; though without what he first said, all words would be meaningless, and when it is forgotten they usually are.”

Our Cultural Value-Deprivation

This essay was first published in the April 1966 issue of The Objectivist and later anthologized in The Voice of Reason: Essays in Objectivist Thought (1989).

A 55-minute lecture version was delivered in April 1966 at Boston’s Ford Hall Forum.

In the years 1951 to 1954, a group of scientists at McGill University conducted a series of experiments that attracted a great deal of attention, led to many further inquiries, and became famous under the general title of “sensory deprivation.”

The experiments consisted of observing the behavior of a man in conditions of isolation which eliminated or significantly reduced the sensations of sight, hearing, and touch. The subject was placed in a small, semi-sound-proofed cubicle, he wore translucent goggles which admitted only a diffuse light, he wore heavy gloves and cardboard cuffs over his hands, and he lay in bed for two to three days, with a minimum of motion.

The results varied from subject to subject, but certain general observations could be made: the subjects found it exceedingly difficult or impossible to concentrate, to maintain a systematic process of thought; they lost their sense of time, they felt disoriented, dissociated from reality, unable to tell the difference between sleeping and waking; many subjects experienced hallucinations. Most of them spoke of feeling as if they were losing control of their consciousness. These effects disappeared shortly after the termination of the experiments.

The scientists pursuing these inquiries state emphatically that no theoretical conclusions can yet be drawn from these and other, similar experiments, because they involve too many variables, as well as undefined differences in the psychological character of the subjects, which led to significant differences in their reactions. But certain general indications can be observed: the experiments seem to indicate that man’s consciousness requires constant activity, a constant stream of changing sensory stimuli, and that monotony or insufficient stimulation impairs its efficiency.

Even though man ignores and, to a large extent, shuts out the messages of his senses when he is concentrating on some specific intellectual task — his senses are his contact with reality, that contact is not stagnant, but is maintained by a constant active process, and when that process is slowed down artificially to subnormal levels, his mind slows down as well.

Man’s consciousness is his least known and most abused vital organ. Most people believe that consciousness as such is some sort of indeterminate faculty which has no nature, no specific identity, and, therefore, no requirements, no needs, no rules for being properly or improperly used. The simplest example of this belief is people’s willingness to lie or cheat, to fake reality on the premise that “I’m the only one who’ll know” or “It’s only in my mind” — without any concern for what this does to one’s mind, what complex, untraceable, disastrous impairments it produces, what crippling damage may result.

The loss of control over one’s consciousness is the most terrifying of human experiences: a consciousness that doubts its own efficacy is in a monstrously intolerable state. Yet men abuse, subvert, and starve their consciousness in a manner they would not dream of applying to their hair, toenails, or stomachs. They know that these things have a specific identity and specific requirements, and if one wishes to preserve them, one must comb one’s hair, trim one’s toenails, and refrain from swallowing rat poison. But one’s mind? Aw, it needs nothing and can swallow anything. Or so most people believe. And they go on believing it while they toss in agony on a psychologist’s couch, screaming that their mind keeps them in a state of chronic terror for no reason whatever.

One valuable aspect of the sensory-deprivation experiments is that they call attention to and dramatize a fact which neither laymen nor psychologists are willing fully to accept: the fact that man’s consciousness possesses a specific nature with specific cognitive needs, that it is not infinitely malleable and cannot be twisted, like a piece of putty, to fit any private evasions or any public “conditioning.”

If sensory deprivation has such serious consequences, what are the consequences of “conceptual deprivation”? This is a question untouched by psychologists, so far, since the majority of today’s psychologists do not recognize the significance of the fact that man’s consciousness requires a conceptual mode of functioning — that thinking is the process of cognition appropriate to man. The ravages of “conceptual deprivation” can be observed all around us. Two interacting aspects of this issue must be distinguished: the primary cause is individual, but the contributory cause is social.

The choice to think or not is volitional. If an individual’s choice is predominantly negative, the result is his self-arrested mental development, a self-made cognitive malnutrition, a stagnant, eroded, impoverished, anxiety-ridden inner life. A social environment can neither force a man to think nor prevent him from thinking. But a social environment can offer incentives or impediments; it can make the exercise of one’s rational faculty easier or harder; it can encourage thinking and penalize evasion or vice versa. Today, our social environment is ruled by evasion — by entrenched, institutionalized evasion — while reason is an outcast and almost an outlaw.

The brashly aggressive irrationality and anti-rationality of today’s culture leaves an individual in an intellectual desert. He is deprived of conceptual stimulation and communication; he is unable to understand people or to be understood. He is locked in the equivalent of an experimental cubicle — only that cubicle is the size of a continent — where he is given the sensory stimulation of screeching, screaming, twisting, jostling throngs, but is cut off from ideas: the sounds are unintelligible, the motions incomprehensible, the pressures unpredictable. In such conditions, only the toughest intellectual giants will preserve the unimpaired efficiency of their mind, at the price of an excruciating effort. The rest will give up — usually, in college — and will collapse into hysterical panic (the “activists”) or into sluggish lethargy (the consensus-followers); and some will suffer from conceptual hallucinations (the existentialists).

The subject of “conceptual deprivation” is too vast to cover in one lecture and can merely be indicated. What I want to discuss today is one particular aspect of it: the question of value-deprivation.

A value is that which one acts to gain and/or keep. Values are the motivating power of man’s actions and a necessity of his survival, psychologically as well as physically.

Man’s values control his subconscious emotional mechanism, which functions like a computer adding up his desires, his experiences, his fulfillments and frustrations — like a sensitive guardian watching and constantly assessing his relationship to reality. The key question which this computer is programmed to answer is: What is possible to me?

There is a certain similarity between the issue of sensory perception and the issue of values. Discussing “The Cognitive Consequences of Early Sensory Deprivation,” Dr. Jerome S. Bruner writes: “One may suggest that one of the prime sources of anxiety is a state in which one’s conception or perception of the environment with which one must deal does not ‘fit’ or predict that environment in a manner that makes action possible.” [Sensory Deprivation, a symposium at Harvard Medical School, edited by Philip Solomon et al., Cambridge: Harvard University Press, 1961.] If severe and prolonged enough, the absence of a normal, active flow of sensory stimuli may disintegrate the complex organization and the interdependent functions of man’s consciousness.

Man’s emotional mechanism works as the barometer of the efficacy or impotence of his actions. If severe and prolonged enough, the absence of a normal, active flow of value experiences may disintegrate and paralyze man’s consciousness — by telling him that no action is possible.

The form in which man experiences the reality of his values is pleasure.

[An essay from The Virtue of Selfishness on “The Psychology of Pleasure” states,] “Pleasure, for man, is not a luxury, but a profound psychological need. Pleasure (in the widest sense of the term) is a metaphysical concomitant of life, the reward and consequence of successful action — just as pain is the insignia of failure, destruction, death. . . . The state of enjoyment gives [man] a direct experience of his own efficacy, of his competence to deal with the facts of reality, to achieve his values, to live. . . . As pleasure emotionally entails a sense of efficacy, so pain emotionally entails a sense of impotence. In letting man experience, in his own person, the sense that life is a value and that he is a value, pleasure serves as the emotional fuel of man’s existence.”

Where — in today’s culture — can a man find any values or any meaningful pleasure?

If a man holds a rational, or even semi-rational, view of life, where can he find any confirmation of it, any inspiring or encouraging phenomena?

A chronic lack of pleasure, of any enjoyable, rewarding or stimulating experiences, produces a slow, gradual, day-by-day erosion of man’s emotional vitality, which he may ignore or repress, but which is recorded by the relentless computer of his subconscious mechanism that registers an ebbing flow, then a trickle, then a few last drops of fuel — until the day when his inner motor stops and he wonders desperately why he has no desire to go on, unable to find any definable cause of his hopeless, chronic sense of exhaustion.

Yes, there are a few giants of spiritual self-sufficiency who can withstand even this. But this is too much to ask or to expect of most people, who are unable to generate and to maintain their own emotional fuel — their love of life — in the midst of a dead planet or a dead culture. And it is not an accident that this is the kind of agony — death by value-strangulation — that a culture dominated by alleged humanitarians imposes on the millions of men who need its help.

A peculiarity of certain types of asphyxiation — such as death from carbon monoxide — is that the victims do not notice it: the fumes leave them no awareness of their need of fresh air. The specific symptom of value-deprivation is a gradual lowering of one’s expectations. We have already absorbed so much of our cultural fumes that we take the constant pressure of irrationality, injustice, corruption and hooligan tactics for granted, as if nothing better could be expected of life. It is only in the privacy of their own mind that men scream in protest at times — and promptly stifle the scream as “unrealistic” or “impractical.” The man to whom values have no reality any longer — the man or the society that regards the pursuit of values, of the good, as impractical — is finished psychologically.

If, subconsciously, incoherently, inarticulately, men are still struggling for a breath of fresh air — where would they find it in today’s cultural atmosphere?

The foundation of any culture, the source responsible for all of its manifestations, is its philosophy. What does modern philosophy offer us? Virtually the only point of agreement among today’s leading philosophers is that there is no such thing as philosophy — and that this knowledge constitutes their claim to the title of philosophers. With a hysterical virulence, strange in advocates of skepticism, they insist that there can be no valid philosophical systems (i.e., there can be no integrated, consistent, comprehensive view of existence) — that there are no answers to fundamental questions — there is no such thing as truth — there is no such thing as reason, and the battle is only over what should replace it: “linguistic games” or unbridled feelings?

An excellent summary of the state of modern philosophy was offered in Time (January 7, 1966).

Philosophy dead? It often seems so. In a world of war and change, of principles armed with bombs and technology searching for principles, the alarming thing is not what philosophers say but what they fail to say. When reason is overturned, blind passions are rampant, and urgent questions mount, men turn for guidance to . . . almost anyone except their traditional guide, the philosopher. . . . Contemporary philosophy looks inward at its own problems rather than outward at men, and philosophizes about philosophy, not about life.

And further:

For both movements [the analytic and the existentialist], a question such as ‘What is truth?’ becomes impossible to answer. The logical positivist would say that a particular statement of fact can be declared true or false by empirical evidence; anything else is meaningless. A language philosopher would content himself with analyzing all the ways the word true can be used. The existentialist would emphasize what is true for a person in a particular situation.

What, then, are modern philosophers busy doing? “Laymen glancing at the June 10, 1965, issue of the Journal of Philosophy will find a brace of learned analysts discussing whether the sentence ‘There are brown things and there are cows’ is best expressed by the formula (∃x) Exw • (∃x) Exy or by (∃x) Bx • (∃x) Cx.”

If, in spite of this, someone might still hope to find something of value in modern philosophy, he will be told off explicitly.

A great many of his colleagues in the U.S. today would agree with Donald Kalish, chairman of the philosophy department at U.C.L.A., who says: “There is no system of philosophy to spin out. There are no ethical truths, there are just clarifications of particular ethical problems. Take advantage of these clarifications and work out your own existence. You are mistaken to think anyone ever had the answers. There are no answers. Be brave and face up to it.”

This means that to look for ethical truths (for moral principles or values) is to be a coward — and that bravery consists of dispensing with ethics, truth, values, and of acting like a drunken driver or like the mobs that riot in the streets of the cities throughout the world.

If men seek guidance, the very motive that draws them to philosophy — the desire to understand — makes them give it up. And, along with philosophy, a man gives up the ambitious eagerness of his mind, the quest for knowledge, the cleanliness of certainty. He shrinks the range of his vision, lowers his expectations and his eyes, and moves on, watching the small square of his immediate steps, never raising his head again. He had looked for intellectual values; the emotion of contempt and revulsion was all he found.

If anyone attempts to turn from philosophy to religion, he will find the situation still worse. When religious leaders form a new movement under a slogan such as “God is dead,” there is no lower place to go in terms of cynical obfuscation.

“Theologian Calls ‘God-Talk’ Irrelevant,” announces a headline in The New York Times of November 21, 1965. What sort of talk is relevant is not made clear in the accompanying story, which is closer to double-talk than to any other linguistic category — as may be judged from the following quotations: “Even if there once was a God, they say, He is no longer part of human experience, and hence ‘God-talk’ is both meaningless and irrelevant in the contemporary situation.” And: “The function of religion is not to overcome the realities of evil, hopelessness and anguish with an apocalyptic vision, but to equip people to live with these problems and to share them through the religious community.”

Does this mean: not to oppose, not to resist, but to share “evil, hopelessness and anguish”? Your guess is as good as mine.

From a report on a television discussion in Denver, Colorado, I gather that one member of this movement has made its goal and meaning a little clearer. “God,” he said, “is a process of creative social intercourse.”

This, I submit, is obscene. I, who am an atheist, am shocked by so brazen an attempt to rob religion of whatever dignity and philosophical intention it might once have possessed. I am shocked by so cynically enormous a degree of contempt for the intelligence and the sensibility of people, specifically of those intended to be taken in by the switch.

Now, if men give up all abstract speculation and turn to the immediate conditions of their existence — to the realm of politics — what values or moral inspiration will they find?

There is a popular saying that alcohol and gasoline don’t mix. Morality and cynicism are as deadly a mixture. But a political system that mixes freedom and controls will try to mix anything — with the same kind of results on the dark roads of men’s spirit.

On the one hand, we are drenched in the slick, stale, sticky platitudes of altruism, an overripe altruism running amok, pouring money, blood, and slogans about global welfare, which everyone drips and no one hears any longer, since monotony — in moral, as well as sensory, deprivation — deadens perception. On the other hand, we all know and say and read in the same newspapers that all these welfare projects are merely a cynical power game, the game of buying votes with public funds, of paying off “election debts” to pressure groups, and of creating new pressure groups to pay off, since the sole purpose of political power, people tacitly believe, is to keep oneself in power, and the sole recourse of the citizens is to gang up on one another and maneuver for who’ll get sacrificed to whom.

The first makes the second possible: altruism gives people an excuse to put up with it. Altruism serves as the veneer — a fading, cracking, peeling veneer — to hide from themselves the terror of their actual belief: that there are no moral principles, that morality is impotent to affect the course of their existence, that they are blind brutes caught in a charnel house and doomed to destruction.

No one believes the political proclamations of our day; no one opposes them. There is no public policy, no ideology, no goals, no convictions, no moral fire, no crusading spirit — nothing but the quiet panic of clinging to the status quo, with the dread of looking back to check the start of the road, with terror of looking ahead to check its end, and with a leadership whose range of vision is shrinking down to the public poll the day after tomorrow’s television appearance.

Promises? “Don’t remind us of promises, that was yesterday, it’s too late.” Results? “Don’t expect results, it’s too soon.” Costs? “Don’t think in terms of old-fashioned economics — the more we spend, the richer we’ll get.” Principles? “Don’t think in terms of old-fashioned labels — we’ve got a consensus.” The future? “Don’t think.”

Whatever public images President Johnson may project, a moral crusader is not one of them. This lends special significance — and a typical whiff of today’s cultural atmosphere — to a column entitled “President Johnson’s Dreams” by James Reston, in The New York Times (February 25, 1966).

Though his reach may exceed his grasp, it has to be said for him that he is a yearner after great ideals. . . . He makes the New Deal seem like a grudging handout. . . . Nothing is beyond his aspirations. Roosevelt’s Vice President, Henry Wallace, was condemned as a visionary because he wanted to give every Hottentot a quart of milk. Humphrey came back talking as if he wanted to send them all to college, and the President’s message in New York was that the Four Freedoms can never be secure in America if they are violated elsewhere in the world. This is not mere speechmaking to Lyndon Johnson. . . . He remains a believer in an unbelieving and cynical world. . . . He is out to eliminate poverty in America. Without any doubt, he feels he can bring adequate education to the multitude, and his confidence goes beyond the boundaries of the nation. Never mind that the British and the French let him know this week that they were reducing their commitments in the world; he sees a combination of American power and generosity dealing somehow with the problem. Has Malthus become as great a menace as Marx? Are the death rate and the birth rate too high? He has programs for them all . . . He looked troubled and sounded harried in New York, and no wonder, for he is bearing all the dreams and lost causes of the century.

Ask yourself: what is the moral and intellectual state of a nation that gives a blank check on its wealth, its work, its efforts, its lives to a “yearner” and “dreamer,” to spend on lost causes?

Can anyone feel morally inspired to live and work for such a purpose?

Can anyone preserve any values by looking at anything today? If a man who earns his living hears constant denunciations of his “selfish greed” and then, as a moral example, is offered the spectacle of the War on Poverty — which fills the newspapers with allegations of political favoritism, intrigues, maneuvering, corruption among its “selfless” administrators — what will happen to his sense of honesty? If a young man struggles sixteen hours a day to work his way through school, and then has to pay taxes to help the dropouts from the dropout programs — what will happen to his ambition? If a man saves for years to build a home, which is then seized by the profiteers of Urban Renewal because their profits are “in the public interest,” but his are not — what will happen to his sense of justice? If a miserable little private holdup man is hauled off to jail, but when the government forces men into a gang big enough to be called a union and they hold up New York City, they get away with it — what will happen to the public’s respect for the law?

Can anyone wish to give his life to defend the rights of South Vietnam — when the rights of Poland, Latvia, Lithuania, Estonia, Czechoslovakia, Yugoslavia, Albania, East Germany, North Korea, Katanga, Cuba, and Hungary were not defended? Can anyone wish to uphold the honor of our treaty obligations in South Vietnam when it was not upheld on the construction site of the wall in Berlin? Can anyone acquire intellectual integrity by observing that it is the collectivists who take a moral stand against the draft, in defense of individual rights — while the so-called “conservatives” insist that young men must be drafted and sent to die in jungle swamps, in order that the South Vietnamese may hold a “democratic” election and vote themselves into communism, if they so choose?

The next time you hear about a crazed gang of juvenile delinquents, don’t look for such explanations as “slum childhood,” “economic underprivilege,” or “parental neglect.” Look at the moral atmosphere of the country, at the example set by their elders and by their public leaders.

Today, the very motive that arouses men’s interest in politics — their sense of responsibility — makes them give it up. And along with politics a man gives up his good will toward people, his benevolence, his openness, his fairness. He withdraws into the small, tight, windowless cellar of his range-of-the-moment concerns, shrinking from any human contact, convinced that the rule of the game is to kill or be killed and that the only action possible to him is to defend himself against every passerby. He had looked for social values; the emotion of contempt and revulsion was all he found.

In the decadent eras of history, in the periods when human hopes and values were collapsing, there was, as a rule, one realm to which men could turn for support, to preserve their image of man, their vision of life’s better possibilities, and their courage. That realm was art.

Let us take a look at the art of our age.

While preparing this discussion, I picked up at random the Sunday Book Review section of The New York Times of March 20, 1966. I shall quote from the three leading reviews of current fiction.

1. “In his new book, it is as if [the author] has taken hold of his flaws, weaknesses, errors, and indulgences, and instead of dealing strictly with them, has made them the subject of his esthetic intention. The scatology has hit the fan. When homosexual camp has become a cliché, he tries to make it new by poking it at the reader from every direction. . . . There are floating neon images of decay, corruption, putrefaction, illness.” This is not a negative review, but an admiringly reproachful one: the reviewer does not like this particular novel, but he extols the author’s talent and urges him to do better. As he puts it: “Give us this day our daily horror, agreed; but carry through on your promises.”

2. The second review is of the same order: respectfully admiring toward the author, but critical of the particular novel under discussion. “It’s hard, bright, and as cold as a block of ice. Gratuitous evil, upholstered innocence, and insane social arrangements condemn [the author’s] characters to frightful violence. They must do or be done to. Under sentence, they move inexorably toward futility and destruction. . . . Three people are murdered during a wave of private crime in the West Indies. One of the murderers earns $100,000. The chief engineer of the bizarre electrochemical derangement of two of the prey collects a lifetime of compensation for a lousy childhood. The victims burn up, get shot or pushed down a thousand-foot ravine. It’s a total dark victory. One can infer positive values only by their absence. The author’s own attitude is as antimoral as a tombstone.”

3. The third review is enthusiastic about a novel which it describes as “remarkable as a rare instance of pornography sublimed to purest art.” The content of the novel is indicated as follows: “The story gradually opens out into a Daedalian maze of perverse relationships — a clandestine society of sinister formality and elegance where the primary bond is mutual complicity in dedication to the pleasures of sadism and masochism. [The heroine] is initiated into this world by her lover, who one day takes her to a secluded mansion where she is trained through the discipline of chains and whip to be totally submissive to the men who are her masters. . . . During her subsequent progress, she is subjected to every sort of sexual debasement and torture, only to be returned in the penultimate stage of her education to a still more brutal institution, a ‘gynaeceum’ where she not only endures the cruelest torments but begins to fulfill the sadistic lesbian underside of her own nature.” The theme of this book, according to the reviewer, is: “a perversion of the Christian mystery of exaltation through debasement, of the extremity of suffering transformed into an ultimate victory over the limitations of being.”

If one turns from that muck to the visual arts, one finds the same sewer in somewhat different forms. To the extent that they communicate anything at all, the visual arts are ruled by a single principle: distortion. Distortion of perspective, of space, of shape, of color, and, above all, of the human figure. We are surrounded by images of distorted, dismembered, disintegrated human bodies — such as might be drawn by a retarded five-year-old — and they pursue us everywhere: on subway ads, in fashion magazines, in TV commercials, or suspended on chains over our heads in fashionable concert halls.

There is also the nonrepresentational — or Rorschach — school of art, consisting of blobs, swirls, and smears which are and aren’t, which are anything you might want them to be provided you stare at them long enough, keeping your eyes and mind out of focus. Provided also you forget that the Rorschach test was devised to detect mental illness.

If one were to look for the purpose of that sort of stuff, the kindest thing to say would be that the purpose is to take in the suckers and provide a field day for pretentious mediocrities. But if one looked deeper, one would find something much worse: the attempt to make you doubt the evidence of your senses and the sanity of your mind.

Art is a selective re-creation of reality according to an artist’s metaphysical value judgments. Observe what image of man, of life and of reality modern art infects people with — particularly the young whose first access to a broad view of existence and first source of values lie in the realm of art.

Today, the very motive that draws a man to art — the quest for enjoyment — makes him run from it for his life. He runs to the gray, sunless, meaningless drudgery of his daily routine, with nothing to relieve it, nothing to expect or to enjoy. And he soon stops asking the tortured question: “Is there anything to see tonight? Is there anything to read?” Along with art, he gives up his vision of values and forgets that he had ever hoped to find or to achieve them.

He had looked for inspiration. Contempt and revulsion were not the only emotions he found, but also horror, indignation, and such a degree of boredom and loathing that anything is preferable to it — including the brutalizing emptiness of an existence devoid of any longing for values.

If you wonder what is wrong with people today, consider the fact that no laboratory experiment could ever reproduce so thorough a state of value-deprivation.

The consequences take many forms. Here is some of the evidence.

A survey in The New York Times (March 21, 1966) quotes some observers who estimate that forty to fifty percent of college students are drug addicts, then adds:

Actually, no one knows, even approximately, how many students take drugs. But everyone agrees that the number is rising, that it has been for several years and that no one is quite sure what to do about it. . . .

The drug takers are majoring in the humanities or social sciences, with more in English than any other subject. There are fewer consistent users in the sciences or in the professional schools. . . .

[The drug takers] are vaguely leftist, disenchanted with American policies in Vietnam, agitated because there are Negro ghettos and bored with conventional politics. They do not join the Peace Corps, which, a student at Penn State said, “is for Boy Scouts.”

Their fathers, more often than not, are professional men or white-collar executives. They are not deprived. A California psychiatrist says that the children of television writers in Hollywood use drugs more than any other group. . . .

The LSD users speak of dissolving the ego, meeting the naked self, finding a truly religious experience, and being so terribly honest with themselves that they know that all about them is sham. . . .

Why do they increasingly drop out of school and join the LSD cult, there to contemplate nature, induce periodic insanity, and pursue a philosophy that is a curious mélange of Zen, Aldous Huxley, existentialism, and leftover Orientalism? Dr. John D. Walmer, director of the mental health clinic at Penn State, suggests that “for people who are chronically unhappy drugs bring some relief from a world without purpose.” George H. Gaffney, deputy commissioner of narcotics, says students take drugs because “of the growing disrespect for authority, because some professors just don’t care to set any kind of moral influence and because of the growing beatnik influence.” Dr. Harvey Powelson, director of the psychiatric clinic at the Berkeley campus of the University of California, notes “a connection with mystical movements in general.” . . .

A boy at San Francisco State may have spoken for his generation when he said he smoked marijuana and used LSD “because there is just no reason not to.” He was absolutely sure that this was so.

Who — in today’s culture — would have given him any valid reason to think otherwise?

Here is another aspect of the same phenomenon (The New York Times, December 29, 1964):

The number of adolescent suicides and suicide attempts is a source of alarm to an increasing number of educators, doctors, and parents. Princeton added a second full-time psychiatrist to its health services this fall; other schools are expanding existing services; at Columbia University the number of students seeking professional help has tripled in the last ten years. . . .

Surprisingly, Cornell doctors found that the student-patient who achieved the highest marks was the one most likely to do away with himself. Nonsuicidal students, on the other hand, were often doing poorly in their academic work. The bright students too often demanded far more of themselves than did either their professors or the university.

Is it a matter of what the bright students demanded of themselves — or of life? A much more likely explanation is that the better the student, the more of today’s intellectual poison he had absorbed; being intelligent, he saw too clearly what sort of existence awaited him and, being too young to find an antidote, he could not stand the prospect.

When a culture is dedicated to the destruction of values — of all values, of values as such — men’s psychological destruction has to follow.

We hear it said that this is merely a period of transition, confusion, and growth, and that the leaders of today’s intellectual trends are groping for new values. But here is what makes their motives suspect. When the scientists of the Renaissance concluded that certain pseudo-sciences of the Middle Ages were invalid, they did not attempt to take them over and ride on their prestige; the chemists did not call themselves alchemists, the astronomers did not call themselves astrologers. But modern philosophers proclaim themselves to be philosophers while struggling to invalidate the essence of philosophy: the study of the fundamental, universal principles of existence. When men like Auguste Comte or Karl Marx decided to substitute society for God, they had the good grace not to call themselves theologians. When the esthetic innovators of the nineteenth century created a new literary form, they called it a “novel,” not an “anti-poem” — unlike the pretentious mediocrities of today who write “anti-novels.” When decorative artists began to design textiles and linoleums, they did not hang them up in frames on walls or entitle them “a representation of pure emotion.”

The exponents of modern movements do not seek to convert you to their values — they haven’t any — but to destroy yours. Nihilism and destruction are the almost explicit goals of today’s trends — and the horror is that these trends move on, unopposed.

Who is to blame? All those who are afraid to speak. All those who are still able to know better, but who are willing to temporize, to compromise, and thus to sanction an evil of that magnitude. All those intellectual leaders who are afraid to break with today’s culture, while knowing that it has rotted to the core — who are afraid to check, challenge, and reject its basic premises, while knowing that they are seeing the ultimate results — who are afraid to step out of the “mainstream,” while knowing that it is running with blood — who cringe, evade, and back away from the advance of screeching, bearded, drugged barbarians.

Now you may logically want to ask me the question: What is the solution and the antidote? But to this question, I have given an answer — at length — elsewhere. The answer lies outside today’s cultural “mainstream.” Its name is Objectivism.

Of Living Death

This essay was first published in the September–November 1968 issues of The Objectivist and later anthologized in The Voice of Reason: Essays in Objectivist Thought (1989).

It was also delivered in lecture form in December 1968 at Boston’s Ford Hall Forum and as a radio address. The lecture audio lasts 56 minutes, followed by a 55-minute Q&A.

Those who wish to observe the role of philosophy in human existence may see it dramatized on a grand (and gruesome) scale in the conflict splitting the Catholic church today.

Observe, in that conflict, men’s fear of identifying or challenging philosophical fundamentals: both sides are willing to fight in silent confusion, to stake their beliefs, their careers, their reputations on the outcome of a battle over the effects of an unnamed cause. One side is composed predominantly of men who dare not name the cause; the other, of men who dare not discover it.

Both sides claim to be puzzled and disappointed by what they regard as a contradiction in the two recent encyclicals of Pope Paul VI. The so-called conservatives (speaking in religious, not political, terms) were dismayed by the encyclical Populorum Progressio (On the Development of Peoples) — which advocated global statism — while the so-called liberals hailed it as a progressive document. Now the conservatives are hailing the encyclical Humanae Vitae (Of Human Life) — which forbids the use of contraceptives — while the liberals are dismayed by it. Both sides seem to find the two documents inconsistent. But the inconsistency is theirs, not the pontiff’s. The two encyclicals are strictly, flawlessly consistent in respect to their basic philosophy and ultimate goal: both come from the same view of man’s nature and are aimed at establishing the same conditions for his life on earth. The first of these two encyclicals forbade ambition, the second forbids enjoyment; the first enslaved man to the physical needs of others, the second enslaves him to the physical capacities of his own body; the first damned achievement, the second damns love.

The doctrine that man’s sexual capacity belongs to a lower or animal part of his nature has had a long history in the Catholic church. It is the necessary consequence of the doctrine that man is not an integrated entity, but a being torn apart by two opposite, antagonistic, irreconcilable elements: his body, which is of this earth, and his soul, which is of another, supernatural realm. According to that doctrine, man’s sexual capacity — regardless of how it is exercised or motivated, not merely its abuses, not unfastidious indulgence or promiscuity, but the capacity as such — is sinful or depraved.

For centuries, the dominant teaching of the church held that sexuality is evil, that only the need to avoid the extinction of the human species grants sex the status of a necessary evil and, therefore, only procreation can redeem or excuse it. In modern times, many Catholic writers have denied that such is the church’s view. But what is its view? They did not answer.

Let us see if we can find the answer in the encyclical Humanae Vitae.

Dealing with the subject of birth control, the encyclical prohibits all forms of contraception (except the so-called “rhythm method”). The prohibition is total, rigid, unequivocal. It is enunciated as a moral absolute.

Bear in mind what this subject entails. Try to hold an image of horror spread across space and time — across the entire globe and through all the centuries — the image of parents chained, like beasts of burden, to the physical needs of a growing brood of children — young parents aging prematurely while fighting a losing battle against starvation — the skeletal hordes of unwanted children born without a chance to live — the unwed mothers slaughtered in the unsanitary dens of incompetent abortionists — the silent terror hanging, for every couple, over every moment of love. If one holds this image while hearing that this nightmare is not to be stopped, the first question one will ask is: Why? In the name of humanity, one will assume that some inconceivable, but crucially important reason must motivate any human being who would seek to let that carnage go on uncontested.

So the first thing one will look for in the encyclical is that reason, an answer to that Why?

“The problem of birth,” the encyclical declares, “like every other problem regarding human life, is to be considered . . . in the light of an integral vision of man and of his vocation, not only his natural and earthly, but also his supernatural and eternal, vocation.” [Paragraph 7]

And:

A reciprocal act of love, which jeopardizes the responsibility to transmit life which God the Creator, according to particular laws, inserted therein, is in contradiction with the design constitutive of marriage, and with the will of the author of life. To use this divine gift, destroying, even if only partially, its meaning and its purpose, is to contradict the nature both of man and of woman and of their most intimate relationship, and therefore it is to contradict also the plan of God and His will. [13]

And this is all. In the entire encyclical, this is the only reason given (but repeated over and over again) why men should transform their highest experience of happiness — their love — into a source of lifelong agony. Do so — the encyclical commands — because it is God’s will.

I, who do not believe in God, wonder why those who do would ascribe to him such a sadistic design, when God is supposed to be the archetype of mercy, kindness, and benevolence. What earthly goal is served by that doctrine? The answer runs like a hidden thread through the encyclical’s labyrinthian convolutions, repetitions, and exhortations.

In the darker corners of that labyrinth, one finds some snatches of argument, in alleged support of the mystic axiom, but these arguments are embarrassingly transparent equivocations. For instance:

. . . to make use of the gift of conjugal love while respecting the laws of the generative process means to acknowledge oneself not to be the arbiter of the sources of human life, but rather the minister of the design established by the Creator. In fact, just as man does not have unlimited dominion over his body in general, so also, with particular reason, he has no such dominion over his creative faculties as such, because of their intrinsic ordination toward raising up life, of which God is the principle. [13]

What is meant here by the words “man does not have unlimited dominion over his body in general”? The obvious meaning is that man cannot change the metaphysical nature of his body, which is true. But man has the power of choice in regard to the actions of his body — specifically, in regard to “his creative faculties,” and the responsibility for the use of these particular faculties is most crucially his. “To acknowledge oneself not to be the arbiter of the sources of human life” is to evade and to default on that responsibility. Here again, the same equivocation or package-deal is involved. Does man have the power to determine the nature of his procreative faculty? No. But granted that nature, is he the arbiter of bringing a new human life into existence? He most certainly is, and he (with his mate) is the sole arbiter of that decision — and the consequences of that decision affect and determine the entire course of his life.

This is a clue to that paragraph’s intention: if man believed that so crucial a choice as procreation were not in his control, what would it do to his control over his life, his goals, his future?

The passive obedience and helpless surrender to the physical functions of one’s body, the necessity to let procreation be the inevitable result of the sexual act, is the natural fate of animals, not of men. In spite of its concern with man’s higher aspirations, with his soul, with the sanctity of married love — it is to the level of animals that the encyclical seeks to reduce man’s sex life, in fact, in reality, on earth. What does this indicate about the encyclical’s view of sex?

Anticipating certain obvious objections, the encyclical declares:

Now some may ask: In the present case, is it not reasonable in many circumstances to have recourse to artificial birth control if, thereby, we secure the harmony and peace of the family, and better conditions for the education of children already born? To this question it is necessary to reply with clarity: The church is the first to praise and recommend the intervention of intelligence in a function which so closely associates the rational creature with his Creator; but she affirms that this must be done with respect for the order established by God. [16]

To what does this subordinate man’s intelligence? If intelligence is forbidden to consider the fundamental problems of man’s existence, forbidden to alleviate his suffering, what does this indicate about the encyclical’s view of man — and of reason?

History can answer this particular question. History has seen a period of approximately ten centuries, known as the Dark and Middle Ages, when philosophy was regarded as “the handmaiden of theology,” and reason as the humble subordinate of faith. The results speak for themselves.

It must not be forgotten that the Catholic church has fought the advance of science since the Renaissance: from Galileo’s astronomy, to the dissection of corpses, which was the start of modern medicine, to the discovery of anesthesia in the nineteenth century, the greatest single discovery in respect to the incalculable amount of terrible suffering it has spared mankind. The Catholic church has fought medical progress by means of the same argument: that the application of knowledge to the relief of human suffering is an attempt to contradict God’s design. Specifically in regard to anesthesia during childbirth, the argument claimed that since God intended woman to suffer while giving birth, man has no right to intervene. (!)

The encyclical does not recommend unlimited procreation. It does not object to all means of birth control — only to those it calls “artificial” (i.e., scientific). It does not object to man “contradicting God’s will” nor to man being “the arbiter of the sources of human life,” provided he uses the means it endorses: abstinence.

Discussing the issue of “responsible parenthood,” the encyclical states: “In relation to physical, economic, psychological and social conditions, responsible parenthood is exercised, either by the deliberate and generous decision to raise a numerous family, or by the decision, made for grave motives and with due respect for the moral law, to avoid for the time being, or even for an indeterminate period, a new birth.” [10] To avoid — by what means? By abstaining from sexual intercourse.

The lines preceding that passage are: “In relation to the tendencies of instinct or passion, responsible parenthood means the necessary dominion which reason and will must exercise over them.” [10] How a man is to force his reason to obey an irrational injunction and what it would do to him psychologically, is not mentioned.

Further on, under the heading “Mastery of Self,” the encyclical declares:

To dominate instinct by means of one’s reason and free will undoubtedly requires ascetic practices . . . Yet this discipline which is proper to the purity of married couples, far from harming conjugal love, rather confers on it a higher human value. It demands continual effort yet, thanks to its beneficent influence, husband and wife fully develop their personalities, being enriched with spiritual values. . . . Such discipline . . . helps both parties to drive out selfishness, the enemy of true love; and deepens their sense of responsibility. [21]

If you can bear that style of expression being used to discuss such matters — which I find close to unbearable — and if you focus on the meaning, you will observe that the “discipline,” the “continual effort,” the “beneficent influence,” the “higher human value” refer to the torture of sexual frustration.

No, the encyclical does not say that sex as such is evil; it merely says that sexual abstinence in marriage is “a higher human value.” What does this indicate about the encyclical’s view of sex — and of marriage?

Its view of marriage is fairly explicit. “[Conjugal] love is first of all fully human, that is to say, of the senses and of the spirit at the same time. It is not, then, a simple transport of instinct and sentiment, but also, and principally, an act of the free will, intended to endure and to grow by means of the joys and sorrows of daily life, in such a way that husband and wife become one only heart and one only soul, and together attain their human perfection.

“Then this love is total; that is to say, it is a very special form of personal friendship, in which husband and wife generously share everything, without undue reservations or selfish calculations.” [9]

To classify the unique emotion of romantic love as a form of friendship is to obliterate it: the two emotional categories are mutually exclusive. The feeling of friendship is asexual; it can be experienced toward a member of one’s own sex.

There are many other indications of this kind scattered through the encyclical. For instance: “These acts, by which husband and wife are united in chaste intimacy and by means of which human life is transmitted, are, as the council recalled, ‘noble and worthy.’” [11] It is not chastity that one seeks in sex, and to describe the sexual act in such terms is to emasculate the meaning of marriage.

There are constant references to a married couple’s duties, which have to be considered in the context of the sexual act — “duties toward God, toward themselves, toward the family and toward society.”[10] If there is any one concept which, when associated with sex, would render a man impotent, it is the concept of “duty.”

To understand the full meaning of the encyclical’s view of sex, I shall ask you to identify the common denominator — the common intention — of the following quotations:

[The church’s] teaching, often set forth by the Magisterium, is founded upon the inseparable connection, willed by God and unable to be broken by man on his own initiative, between the two meanings of the conjugal act: the unitive meaning and the procreative meaning. Indeed, by its intimate structure, the conjugal act, while most closely uniting husband and wife, capacitates them for the generation of new lives. [12]

“[The conjugal acts] do not cease to be lawful if, for causes independent of the will of husband and wife, they are foreseen to be infecund.” [11, emphasis added.]

The church forbids: “every action which, either in anticipation of the conjugal act or its accomplishment, or in the development of its natural consequences, proposes, whether as an end or as a means, to render procreation impossible.” [14]

The church does not object to “an impediment to procreation” which might result from the medical treatment of a disease, “provided such impediment is not, for whatever motive, directly willed.” [15, emphasis added.]

And finally, the church “teaches that each and every marriage act (‘quilibet matrimonii usus’) must remain open to the transmission of life.” [11]

What is the common denominator of these statements? It is not merely the tenet that sex as such is evil, but deeper: it is the commandment by means of which sex will become evil, the commandment which, if accepted, will divorce sex from love, will castrate man spiritually and will turn sex into a meaningless physical indulgence. That commandment is: man must not regard sex as an end in itself, but only as a means to an end.

Procreation and “God’s design” are not the major concern of that doctrine; they are merely primitive rationalizations to which man’s self-esteem is to be sacrificed. If it were otherwise, why the stressed insistence on forbidding man to impede procreation by his conscious will and choice? Why the tolerance of the conjugal acts of couples who are infecund by nature rather than by choice? What is so evil about that choice? There is only one answer: that choice rests on a couple’s conviction that the justification of sex is their own enjoyment. And this is the view which the church’s doctrine is intent on forbidding at any price.

That such is the doctrine’s intention, is supported by the church’s stand on the so-called “rhythm method” of birth control, which the encyclical approves and recommends.

The church is coherent with herself when she considers recourse to the infecund periods to be licit, while at the same time condemning, as being always illicit, the use of means directly contrary to fecundation, even if such use is inspired by reasons which may appear honest and serious. . . . It is true that, in the one and the other case, the married couple are concordant in the positive will of avoiding children for plausible reasons, seeking the certainty that offspring will not arrive; but it is also true that only in the former case are they able to renounce the use of marriage in the fecund periods when, for just motives, procreation is not desirable, while making use of it during infecund periods to manifest their affection and to safeguard their mutual fidelity. By so doing, they give proof of a truly and integrally honest love. [16]

On the face of it, this does not make any kind of sense at all — and the church has often been accused of hypocrisy or compromise because it permits this very unreliable method of birth control while forbidding all others. But examine that statement from the aspect of its intention, and you will see that the church is indeed “coherent with herself,” i.e., consistent.

What is the psychological difference between the “rhythm method” and other means of contraception? The difference lies in the fact that, using the “rhythm method,” a couple cannot regard sexual enjoyment as a right and as an end in itself. With the help of some hypocrisy, they merely sneak and snatch some personal pleasure, while keeping the marriage act “open to the transmission of life,” thus acknowledging that childbirth is the only moral justification of sex and that only by the grace of the calendar are they unable to comply.

This acknowledgment is the meaning of the encyclical’s peculiar implication that “to renounce the use of marriage in the fecund periods” is, somehow, a virtue (a renunciation which proper methods of birth control would not require). What else but this acknowledgment can be the meaning of the otherwise unintelligible statement that by the use of the “rhythm method” a couple “give proof of a truly and integrally honest love”?

There is a widespread popular notion to the effect that the Catholic church’s motive in opposing birth control is the desire to enlarge the Catholic population of the world. This may be superficially true of some people’s motives, but it is not the full truth. If it were, the Catholic church would forbid the “rhythm method” along with all other forms of contraception. And, more important, the Catholic church would not fight for anti-birth-control legislation all over the world: if numerical superiority were its motive, it would forbid birth control to its own followers and let it be available to other religious groups.

The motive of the church’s doctrine on this issue is, philosophically, much deeper than that and much worse; the goal is not metaphysical or political or biological, but psychological: if man is forbidden to regard sexual enjoyment as an end in itself, he will not regard love or his own happiness as an end in itself; if so, then he will not regard his own life as an end in itself; if so, then he will not attain self-esteem.

It is not against the gross, animal, physicalistic theories or uses of sex that the encyclical is directed, but against the spiritual meaning of sex in man’s life. (By “spiritual” I mean pertaining to man’s consciousness.) It is not directed against casual, mindless promiscuity, but against romantic love.

To make this clear, let me indicate, in brief essentials, a rational view of the role of sex in man’s existence.

Sex is a physical capacity, but its exercise is determined by man’s mind — by his choice of values, held consciously or subconsciously. To a rational man, sex is an expression of self-esteem — a celebration of himself and of existence. To the man who lacks self-esteem, sex is an attempt to fake it, to acquire its momentary illusion.

Romantic love, in the full sense of the term, is an emotion possible only to the man (or woman) of unbreached self-esteem: it is his response to his own highest values in the person of another — an integrated response of mind and body, of love and sexual desire. Such a man (or woman) is incapable of experiencing a sexual desire divorced from spiritual values.

I quote from Atlas Shrugged: “The men who think that wealth comes from material resources and has no intellectual root or meaning, are the men who think — for the same reason — that sex is a physical capacity which functions independently of one’s mind, choice or code of values. . . . But, in fact, a man’s sexual choice is the result and the sum of his fundamental convictions. . . . Sex is the most profoundly selfish of all acts, an act which [man] cannot perform for any motive but his own enjoyment — just try to think of performing it in a spirit of selfless charity! — an act which is not possible in self-abasement, only in self-exaltation, only in the confidence of being desired and being worthy of desire. . . . Love is our response to our highest values — and can be nothing else. . . . Only the man who extols the purity of a love devoid of desire, is capable of the depravity of a desire devoid of love.”

In other words, sexual promiscuity is to be condemned not because sex as such is evil, but because it is good — too good and too important to be treated casually.

In comparison to the moral and psychological importance of sexual happiness, the issue of procreation is insignificant and irrelevant, except as a deadly threat — and God bless the inventors of the Pill!

The capacity to procreate is merely a potential which man is not obligated to actualize. The choice to have children or not is morally optional. Nature endows man with a variety of potentials — and it is his mind that must decide which capacities he chooses to exercise, according to his own hierarchy of rational goals and values. The mere fact that man has the capacity to kill does not mean that it is his duty to become a murderer; in the same way, the mere fact that man has the capacity to procreate does not mean that it is his duty to commit spiritual suicide by making procreation his primary goal and turning himself into a stud-farm animal.

It is only animals that have to adapt themselves to their physical background and to the biological functions of their bodies. Man adapts his physical background and the use of his biological faculties to himself — to his own needs and values. That is his distinction from all other living species.

To an animal, the rearing of its young is a matter of temporary cycles. To man, it is a lifelong responsibility — a grave responsibility that must not be undertaken causelessly, thoughtlessly, or accidentally.

In regard to the moral aspects of birth control, the primary right involved is not the “right” of an unborn child, or of the family, or of society, or of God. The primary right is one which — in today’s public clamor on the subject — few, if any, voices have had the courage to uphold: the right of man and woman to their own life and happiness — the right not to be regarded as the means to any end.

Man is an end in himself. Romantic love — the profound, exalted, lifelong passion that unites his mind and body in the sexual act — is the living testimony to that principle.

This is what the encyclical seeks to destroy; or, more precisely, to obliterate, as if it does not and cannot exist.

Observe the encyclical’s contemptuous references to sexual desire as “instinct” or “passion,” as if “passion” were a pejorative term. Observe the false dichotomy offered; man’s choice is either mindless, “instinctual” copulation — or marriage, an institution presented not as a union of passionate love, but as a relationship of “chaste intimacy,” of “special personal friendship,” of “discipline proper to purity,” of unselfish duty, of alternating bouts with frustration and pregnancy, and of such unspeakable, Grade-B-movie-folks-next-door kind of boredom that any semi-living man would have to run, in self-preservation, to the nearest whorehouse.

No, I am not exaggerating. I have reserved — as my last piece of evidence on the question of the encyclical’s view of sex — the paragraph in which the coils and veils of euphemistic equivocation got torn, somehow, and the naked truth shows through.

It reads as follows:

Upright men can even better convince themselves of the solid grounds on which the teaching of the church in this field is based, if they care to reflect upon the consequences of methods of artificial birth control. Let them consider, first of all, how wide and easy a road would thus be opened up toward conjugal infidelity and the general lowering of morality. Not much experience is needed in order to know human weakness, and to understand that men — especially the young, who are so vulnerable on this point — have need of encouragement to be faithful to the moral law, so that they must not be offered some easy means of eluding its observance. It is also to be feared that the man, growing used to the employment of anticonceptive practices, may finally lose respect for the woman and, no longer caring for her physical and psychological equilibrium, may come to the point of considering her as a mere instrument of selfish enjoyment, and no longer as his respected and beloved companion. [17]

I cannot conceive of a rational woman who does not want to be precisely an instrument of her husband’s selfish enjoyment. I cannot conceive of what would have to be the mental state of a woman who could desire or accept the position of having a husband who does not derive any selfish enjoyment from sleeping with her. I cannot conceive of anyone, male or female, capable of believing that sexual enjoyment would destroy a husband’s love and respect for his wife — but regarding her as a brood mare and himself as a stud, would cause him to love and respect her.

Actually, this is too evil to discuss much further.

But we must also take note of the first part of that paragraph. It states that “artificial” contraception would open “a wide and easy road toward conjugal infidelity.” Such is the encyclical’s actual view of marriage: that marital fidelity rests on nothing better than fear of pregnancy. Well, “not much experience is needed in order to know” that that fear has never been much of a deterrent to anyone.

Now observe the inhuman cruelty of that paragraph’s reference to the young. Admitting that the young are “vulnerable on this point,” and declaring that they need “encouragement to be faithful to the moral law,” the encyclical forbids them the use of contraceptives, thus making it cold-bloodedly clear that its idea of moral encouragement consists of terror — the sheer, stark terror of young people caught between their first experience of love and the primitive brutality of the moral code of their elders. Surely the authors of the encyclical cannot be ignorant of the fact that it is not the young chasers or the teenage sluts who would be the victims of a ban on contraceptives, but the innocent young who risk their lives in the quest for love — the girl who finds herself pregnant and abandoned by her boyfriend, or the boy who is trapped into a premature, unwanted marriage. To ignore the agony of such victims — the countless suicides, the deaths at the hands of quack abortionists, the drained lives wasted under the double burden of a spurious “dishonor” and of an unwanted child — to ignore all that in the name of “the moral law” is to make a mockery of morality.

Another, and truly incredible mockery, leers at us from that same paragraph 17. As a warning against the use of contraceptives, the encyclical states:

Let it be considered also that a dangerous weapon would thus be placed in the hands of those public authorities who take no heed of moral exigencies. . . . Who will stop rulers from favoring, from even imposing upon their peoples, if they were to consider it necessary, the method of contraception which they judge to be most efficacious? In such a way men, wishing to avoid individual, family or social difficulties encountered in the observance of the divine law, would reach the point of placing at the mercy of the intervention of public authorities the most personal and most reserved sector of conjugal intimacy.

No public authorities have attempted — and no private groups have urged them to attempt — to force contraception on Catholics. But when one remembers that it is the Catholic church that has initiated anti-birth-control legislation the world over and thus has placed “at the mercy of the intervention of public authorities the most personal and most reserved sector of conjugal intimacy” — that statement becomes outrageous. Were it not for the politeness one should preserve toward the papal office, one would call that statement a brazen effrontery.

This leads us to the encyclical’s stand on the issue of abortion, and to another example of inhuman cruelty. Compare the coiling sentimentality of the encyclical’s style when it speaks of “conjugal love” to the clear, brusque, military tone of the following: “We must once again declare that the direct interruption of the generative process already begun, and, above all, directly willed and procured abortion, even if for therapeutic reasons, are to be absolutely excluded as licit means of regulating birth.” [14, emphasis added.]

After extolling the virtue and sanctity of motherhood, as a woman’s highest duty, as her “eternal vocation,” the encyclical attaches a special risk of death to the performance of that duty — an unnecessary death, in the presence of doctors forbidden to save her, as if a woman were only a screaming huddle of infected flesh who must not be permitted to imagine that she has the right to live.

And this policy is advocated by the encyclical’s supporters in the name of their concern for “the sanctity of life” and for “rights” — the rights of the embryo. (!)

I suppose that only the psychological mechanism of projection can make it possible for such advocates to accuse their opponents of being “anti-life.”

Observe that the men who uphold such a concept as “the rights of an embryo,” are the men who deny, negate, and violate the rights of a living human being.

An embryo has no rights. Rights do not pertain to a potential, only to an actual being. A child cannot acquire any rights until it is born. The living take precedence over the not yet living (or the unborn).

Abortion is a moral right — which should be left to the sole discretion of the woman involved; morally, nothing other than her wish in the matter is to be considered. Who can conceivably have the right to dictate to her what disposition she is to make of the functions of her own body? The Catholic church is responsible for this country’s disgracefully barbarian anti-abortion laws, which should be repealed and abolished.

The intensity of the importance that the Catholic church attaches to its doctrine on sex may be gauged by the enormity of the indifference to human suffering expressed in the encyclical. Its authors cannot be ignorant of the fact that man has to earn his living by his own effort, and that there is no couple on earth — on any level of income, in any country, civilized or not — who would be able to support the number of children they would produce if they obeyed the encyclical to the letter.

If we assume the richest couple and include time off for the periods of “purity,” it will still be true that the physical and psychological strain of their “vocation” would be so great that nothing much would be left of them, particularly of the mother, by the time they reached the age of forty.

Consider the position of an average American couple. What would be their life, if they succeeded in raising, say, twelve children, by working from morning till night, by running a desperate race with the periodic trips to maternity wards, with rent bills, grocery bills, clothing bills, pediatricians’ bills, strained-vegetables bills, school book bills, measles, mumps, whooping cough, Christmas trees, movies, ice cream cones, summer camps, party dresses, dates, draft cards, hospitals, colleges — with every salary raise of the industrious, hardworking father mortgaged and swallowed before it is received — what would they have gained at the end of their life except the hope that they might be able to pay their cemetery bills, in advance?

Now consider the position of the majority of mankind, who are barely able to subsist on a level of prehistorical poverty. No strain, no backbreaking effort of the ablest, most conscientious father can enable him properly to feed one child — let alone an open-end progression. The unspeakable misery of stunted, disease-eaten, chronically undernourished children, who die in droves before the age of ten, is a matter of public record. Pope Paul VI — who closes his encyclical by mentioning his title as earthly representative of “the God of holiness and mercy” — cannot be ignorant of these facts; yet he is able to ignore them.

The encyclical brushes this issue aside in a singularly irresponsible manner:

We are well aware of the serious difficulties experienced by public authorities in this regard, especially in the developing countries. To their legitimate preoccupations we devoted our encyclical letter Populorum Progressio. . . . The only possible solution to this question is one which envisages the social and economic progress both of individuals and of the whole of human society, and which respects and promotes true human values.

Neither can one, without grave injustice, consider Divine Providence to be responsible for what depends, instead, on a lack of wisdom in government, on an insufficient sense of social justice, on selfish monopolization or again on blameworthy indolence in confronting the efforts and the sacrifices necessary to insure the raising of living standards of a people and of all its sons. [23]

The encyclical Populorum Progressio advocated the abolition of capitalism and the establishment of a totalitarian, socialist-fascist, global state — in which the right to “the minimum essential for life” is to be the ruling principle and “all other rights whatsoever, including those of property and of free commerce, are to be subordinated to this principle.” (For a discussion of that encyclical, see my article “Requiem for Man” in [Capitalism: The Unknown Ideal].)

If, today, a struggling, desperate man, somewhere in Peru or China or Egypt or Nigeria, accepted the commandments of the present encyclical and strove to be moral, but saw his horde of children dying of hunger around him, the only practical advice the encyclical would give him is: Wait for the establishment of a collectivist world state. What, in God’s name, is he to do in the meantime?

Philosophically, however, the reference to the earlier encyclical, Populorum Progressio, is extremely significant: it is as if Pope Paul VI were pointing to the bridge between the two documents and to their common base.

The global state advocated in Populorum Progressio is a nightmare utopia where all are enslaved to the physical needs of all; its inhabitants are selfless robots, programmed by the tenets of altruism, without personal ambition, without mind, pride, or self-esteem. But self-esteem is a stubborn enemy of all utopias of that kind, and it is doubtful whether mere economic enslavement would destroy it wholly in men’s souls. What Populorum Progressio was intended to achieve from without, in regard to the physical conditions of man’s existence, Humanae Vitae is intended to achieve from within, in regard to the devastation of man’s consciousness.

“Don’t allow men to be happy,” said Ellsworth Toohey in The Fountainhead. “Happiness is self-contained and self-sufficient. . . . Happy men are free men. So kill their joy in living. . . . Make them feel that the mere fact of a personal desire is evil. . . . Unhappy men will come to you. They’ll need you. They’ll come for consolation, for support, for escape. Nature allows no vacuum. Empty man’s soul — and the space is yours to fill.”

Deprived of ambition, yet sentenced to endless toil; deprived of rewards, yet ordered to produce; deprived of sexual enjoyment, yet commanded to procreate; deprived of the right to live, yet forbidden to die — condemned to this state of living death, the graduates of the encyclical Humanae Vitae will be ready to move into the world of Populorum Progressio; they will have no other place to go.

“If some man like Hugh Akston,” said Hank Rearden in Atlas Shrugged, “had told me, when I started, that by accepting the mystics’ theory of sex I was accepting the looters’ theory of economics, I would have laughed in his face. I would not laugh at him now.”

It would be a mistake, however, to suppose that in the subconscious hierarchy of motives of the men who wrote these two encyclicals, the second, Humanae Vitae, was merely the spiritual means to the first, Populorum Progressio, which was the material end. The motives, I believe, were the reverse: Populorum Progressio was merely the material means to Humanae Vitae, which was the spiritual end.

“. . . with our predecessor Pope John XXIII,” says Pope Paul VI in Humanae Vitae, “we repeat: no solution to these difficulties is acceptable ‘which does violence to man’s essential dignity’ and is based only ‘on an utterly materialistic conception of man himself and of his life.’” [23, emphasis added.] They mean it — though not exactly in the way they would have us believe.

In terms of reality, nothing could be more materialistic than an existence devoted to feeding the whole world and procreating to the limit of one’s capacity. But when they say “materialistic,” they mean pertaining to man’s mind and to this earth; by “spiritual,” they mean whatever is anti-man, anti-mind, anti-life, and, above all, anti-possibility of human happiness on earth.

The ultimate goal of these encyclicals’ doctrine is not the material advantages to be gained by the rulers of a global slave state; the ultimate goal is the spiritual emasculation and degradation of man, the extinction of his love of life, which Humanae Vitae is intended to accomplish, and Populorum Progressio merely to embody and perpetuate.

The means of destroying man’s spirit is unearned guilt.

What I said in “Requiem for Man” about the motives of Populorum Progressio applies as fully to Humanae Vitae, with only a minor paraphrase pertaining to its subject. “But, you say, the encyclical’s ideal will not work? It is not intended to work. It is not intended to [achieve human chastity or sexual virtue]; it is intended to induce guilt. It is not intended to be accepted and practiced; it is intended to be accepted and broken — broken by man’s ‘selfish’ desire to [love], which will thus be turned into a shameful weakness. Men who accept as an ideal an irrational goal which they cannot achieve, never lift their heads thereafter — and never discover that their bowed heads were the only goal to be achieved.”

I said, in that article, that Populorum Progressio was produced by the sense of life not of an individual, but of an institution — whose driving power and dominant obsession is the desire to break man’s spirit. Today, I say it, with clearer evidence, about the encyclical Humanae Vitae.

This is the fundamental issue which neither side of the present conflict is willing fully to identify.

The conservatives or traditionalists of the Catholic church seem to know, no matter what rationalizations they propound, that such is the meaning and intention of their doctrine. The liberals seem to be more innocent, at least in this issue, and struggle not to have to face it. But they are the supporters of global statism and, in opposing Humanae Vitae, they are merely fighting the right battle for the wrong reasons. If they win, their social views will still lead them to the same ultimate results.

The rebellion of the victims, the Catholic laymen, has a touch of healthy self-assertiveness; however, if they defy the encyclical and continue to practice birth control, but regard it as a matter of their own weakness and guilt, the encyclical will have won: this is precisely what it was intended to accomplish.

The American bishops of the Catholic church, allegedly struggling to find a compromise, issued a pastoral letter declaring that contraception is an objective evil, but individuals are not necessarily guilty or sinful if they practice it — which amounts to a total abdication from the realm of morality and can lead men only to a deeper sense of guilt.

Such is the tragic futility of attempting to fight the existential consequences of a philosophical issue, without facing and challenging the philosophy that produced them.

This issue is not confined to the Catholic church, and it is deeper than the problem of contraception; it is a moral crisis approaching a climax. The core of the issue is Western civilization’s view of man and of his life. The essence of that view depends on the answer to two interrelated questions: Is man (man the individual) an end in himself? — and: Does man have the right to be happy on this earth?

Throughout its history, the West has been torn by a profound ambivalence on these questions: all of its achievements came from those periods when men acted as if the answer were “Yes” — but, with exceedingly rare exceptions, their spokesmen, the philosophers, kept proclaiming a thunderous “No,” in countless forms.

Neither an individual nor an entire civilization can exist indefinitely with an unresolved conflict of that kind. Our age is paying the penalty for it. And it is our age that will have to resolve it.