'Our Posthuman Future': Biotechnology as a Threat to Human Nature
By COLIN McGINN

OUR POSTHUMAN FUTURE
Consequences of the Biotechnology Revolution.
By Francis Fukuyama.
256 pp. New York: Farrar, Straus & Giroux. $25.

In a sense, all technology is biotechnology: machines interacting with human organisms. Technology is designed to overcome the frailties and limitations of human beings in a state of nature -- to make us faster, stronger, longer-lived, smarter, happier. And all technology raises questions about its real contribution to human welfare: are our lives really better for the existence of the automobile, television, nuclear power? These questions are ethical and political, as well as medical; and they even reach to the philosophical and spiritual. On the whole, we seem pretty well adapted to our technology, at least on the face of it -- but there have always been doubts about whether the human soul thrives best in the oppressively technological world we have created for ourselves. (I am continually struck by how much time I have to spend fixing the machines that supposedly improve my life.)
Today, however, we are faced with a new phase in the powers of human technology; we are on the verge of discovering and implementing an alternative to evolution itself -- direct intervention in the genetic process. This is one of the main subjects of Francis Fukuyama's ''Our Posthuman Future,'' a timely, thoughtful and well-argued contribution to an important subject. Fukuyama, a professor of international political economy at Johns Hopkins University and a member of President Bush's Council on Bioethics, discusses cloning, germ-line genetic engineering, stem cell research, neuropharmacology and anti-aging medicine. His basic concern is the potential for violations of human nature that spring from the new biotechnology. Here science, politics and philosophy intersect, as we try to negotiate the prospect of designer babies, strapping nonagenarians, interspecies hybrids and the like. How dangerous is all this? Should it be regulated or stopped?
The book has three parts. First, we are given an outline of the major technological developments at issue, set against the backdrop of Orwell's ''Nineteen Eighty-four'' and Huxley's ''Brave New World.'' Summing up, Fukuyama writes: ''These developments will be hugely controversial because they will challenge dearly held notions of human equality and the capacity for moral choice; they will give societies new techniques for controlling the behavior of their citizens; they will change our understanding of human personality and identity; they will upend existing social hierarchies and affect the rate of intellectual, material and political progress; and they will affect the nature of global politics.'' The second part of the book endorses the idea that there is such a thing as a universal human nature, and suggests that this idea is central to any substantive conception of human rights. Aristotle emerges at this point as the man who got it right. The third part takes up more practical questions of how best to regulate the biotechnology industry.
Fukuyama spends a good deal of time discussing Prozac and Ritalin, and he argues convincingly that these drugs are heavily overprescribed, raising the specter of a Huxleyan dystopia of self-esteem in a bottle and sedated teenage zombies. Should we really rid life of its natural torments, its unruliness, its dark nights of the soul? These concerns are well taken, but I am not sure why Fukuyama includes the drug issue on his list and not other issues that also do not directly involve genetic tinkering and reproductive technology. What about the whole question of artificial intelligence and the enhancement of human abilities by means of neural implants? Information technology also raises serious questions for human well-being, as the electronic circuit gets closer to the neural circuit. And what about the expansion of plastic surgery or the use of steroids? These all flout the natural order, but since Fukuyama leaves them out of his discussion, his choice of topics is somewhat arbitrary.
The longest section of the book, and the most interesting, explores the question of whether there exists a human nature that these technologies can be said to violate. If human beings are infinitely plastic, with no fixed essence, then whatever we do to alter ourselves will not offend any preset natural order, and will not infringe the moral rights that supposedly flow from our nature. Fukuyama defines human nature in these words: ''human nature is the sum of the behavior and characteristics that are typical of the human species, arising from genetic rather than environmental factors.'' Later he homes in on what he calls our ''emotional gamut'' and suggests that this is what is most under threat from the biotechnologies he discusses: without ''human evils,'' there would be ''no sympathy, compassion, courage, heroism, solidarity or strength of character.'' These statements raise a number of questions that I do not think Fukuyama does enough to address.
First, where should we draw the line between the human evils that are good for us and those that are not? Surely medicine is dedicated to the reduction of suffering, and it would be absurd to limit this effort for fear that people will become more superficial. Are we to nix a cure for baldness because we think thinning hair is character-building? And what about serious genetic defects that may also call forth impressive human virtues?
Second, Fukuyama's conception of human nature seems to me flawed in two ways. He talks continually of what is distinctive to the human species, as if only this could ground a notion of human dignity; but there is no reason the characteristics that ground our rights should not be shared with members of other species, and obviously many of those characteristics are shared -- like the capacity to feel pain, to enjoy family relations, to need food and drink. This does not mean that we lack any qualities that distinguish us from other species; it is just that we don't need to be uniquely gifted with a certain feature in order for it to be part of our inherent nature.
But further, I think it is wrong to limit universal human nature to what is innately determined. The fact that a characteristic is acquired is not by itself sufficient to show that it is not universal, since it may be brought about by a constant aspect of the human environment. Our common nature is a product of our shared genetic blueprint and the natural world in which we are all brought up. Of course, cultures vary in many respects, but there are also ways in which our environment is invariant -- just consider our common subjection to gravity, for example.
Third, it is not at all clear that we need to take a stand on the existence of an innate universal human nature in order to evaluate the coming technologies. Suppose for a moment that human beings have no such universal nature: we can still ask whether a particular innovation will be good for us. I actually agree with Fukuyama that a great deal of human nature is genetically based, but I don't see that this belief is necessary in order to have legitimate qualms about biotechnology. What is necessary is a set of views about what is valuable in human life -- and ought to be protected -- not a particular theory of what is innate and what acquired. The issue of nature versus nurture is really a red herring.
Where I do think Fukuyama is right is in his emphasis on an Aristotelian notion of human flourishing as a guide to public policy, rather than relying on the edict that we should maximize freedom of choice. There is no alternative to figuring out what allows human beings to prosper in deciding what policies to pursue. Freedom unconstrained by a substantive conception of that which makes life worthwhile is a recipe for meaninglessness. And this is where we need our philosophers. As Fukuyama rightly insists, the standard mix of utilitarianism and scientific materialism is not an adequate basis for evaluating the new technologies.
What sorts of regulatory bodies are needed to control biotechnology? Fukuyama suggests, plausibly enough, that the decisions cannot be left to the scientists and captains of industry, because they are obviously heavily invested in the technology they are creating. What we require are governmental institutions that oversee industry. Above all, we need to be thinking about all this now, not when the streets are crowded with mutants and supermen and we wonder where we went wrong. ''Our Posthuman Future'' takes on these issues with the kind of philosophical and political scope that they urgently require. After all, we are dealing here with nothing less than the Nature of Man.
Colin McGinn is a professor of philosophy at Rutgers University. His most recent book is ''The Making of a Philosopher: My Journey Through Twentieth Century Philosophy.''