Digital and genetic techniques increasingly influence life. Our belief in progress through technology stands in the way of a moral debate on this development.


By Rinie van Est


We keep a close watch on what voters and members of parliament want, but the future of our society is determined by something else: technological development. At least, that’s what thinkers such as Dominique Janicaud believe, who wrote: ‘Technological power is more revolutionary than any revolution; it comes from above, no one can know where it is going’. In views such as these, the role of politics is limited to spreading technological innovations properly. I do not agree. Without denying the revolutionary force of technology, I do think politics is capable of steering technology democratically, to a certain degree. In fact, I believe that the interaction between the political domain and the techno-economic domain is the essence of our democracy. But here politics is neglectful, because it has a blind spot for the ideological role that technology plays in our society.


Technology has become part of our nature. It has made us who we are and continues to do so. As the essence of technology changes, so does its effect on mankind. The first technological revolution (skipping hunter-gatherer tools) was the agricultural revolution, characterised by an increased impact on the landscape. Humans, once animals among animals, started considering themselves chosen rulers over nature; see Genesis. And while they used nature, tamed it, and influenced it, they did not see it as wholly manipulable. An occasional famine put mankind in its place.


The agrarian society, which existed for thousands of years, revolved around farmland. The ownership of farmland was the first and foremost source of conflict; it was the primary techno-political arena of the time. And land becoming infertile was by far the worst-case scenario, leading farmers to keep looking for methods of working the land in a way that we would now call ecologically sustainable: with various fertilisation methods, crop rotation, and so on.


In the seventeenth century, a new view unfolded of how mankind should deal with inanimate and living nature. The idea that God manifests himself not only in the Bible, but also in the book of nature, was of crucial influence. This centuries-old notion took on a new meaning when the idea took root that nature is, in essence, a purposeless mechanism that can be analysed and understood; a thought linked to the name of Descartes. Mapping the laws of nature, the divine order and structure, was seen as a religious task.


The idea took root that nature, in essence, is a purposeless mechanism that can be analysed and understood.


What has been mapped can be entered and claimed. A belief in the explicability of nature is gradually accompanied by a belief in its manipulability. An early exponent of this was Francis Bacon, who was convinced that scientific and technological progress contributed to societal and moral progress. He advocated the instrumental use not only of inanimate, but also of living nature. People in his well-known utopia New Atlantis (1624) used telecommunication methods, and they flew like birds. They also bred poultry of sizes that make our broilers look like bantams.


If Bacon was the prophet, James Watt was the saviour. At least, as a well-educated engineer and entrepreneur, he symbolises like no other the close collaboration between three previously separate sectors: science, technology, and business. This trinity literally gave steam to the industrial revolution, which transformed the world from the late eighteenth century onwards, beginning in England.


Thanks to science, man gained increasing control over inanimate nature in particular, through fossil fuels, machines, and tools. The scientific description and explanation of the existing world was surpassed by the creation of a new one: novel ideas were explored scientifically, the resulting insights were applied and optimised technologically, and the results were exploited economically. The list of innovations is endless, from railroads, telegraphy, and electricity, to artificial fertiliser, reinforced concrete, and Bakelite.


Where the agrarian society saw a constant struggle for land, in the techno-political arena of the industrial era it was labour that gave rise to conflict: capital strove to get the most out of labour, and labour resisted this intolerable exploitation. In the same way that ecological sustainability (to be sure, before the term existed) became the leading principle of the agrarian society, social justice became the leading principle of its industrial successor. Both principles, for that matter, are still relevant today.


The Second World War was a double milestone. On the one hand, Auschwitz and Hiroshima marked the beginning of doubt about the blessings of technology; on the other hand, that same period saw a radical development in the notion of manipulability, and thus, finally, the third revolution in our series. As early as the 1950s, Hannah Arendt observed that living nature, and humanity itself, had become subject to manipulation. Where in the past mineral extraction, fertile ground, transport, and the production of goods were the objects of improvement, now matters such as our personality, procreation, physical achievements, social interaction, and memory also appear open to betterment. Research fields such as genetics, neurology, pharmacology, medical technology, and information and communication technology all contribute to the project ‘Control over Human Nature’.


Of course there is a massive gap between living and inanimate nature, but engineers consider a gap merely as something to be bridged. This bridge was built by the mathematician Norbert Wiener, when he succeeded in giving organisms and mechanisms a common denominator: that of information-generating systems based on feedback loops. With this, Wiener laid the foundations for cybernetics, in which living and inanimate nature are, in essence, identical and controllable. This turned out to be the basis for the information revolution that we are experiencing today.


Wiener laid the foundations for cybernetics, in which living and inanimate nature are identical and controllable.


Wiener’s insight cleared the way for industrialising our biological, mental, and social lives as well, as the philosopher of technology Bernard Stiegler remarked; and he would not have been a French thinker had he not coined a term for it: hyperindustrialisation. More interesting, perhaps, is the observation that these domains are thereby exposed to typically industrial phenomena such as rationalisation, economisation, planning, and the imperative of efficiency.


Now that IT and nanotechnology are taking hold in biotechnology and the neurosciences, it can be expected that in the coming decades all sorts of biological, cognitive, and social processes will become increasingly digitised. This outlines two technological megatrends: biology is increasingly becoming technology, and vice versa. The first trend implies that living systems such as the human body and brain are increasingly seen as made up of building blocks: we will be able to intervene in the body in the same way that we tinker with mechanical items. The second trend implies that machines and technological systems are becoming increasingly lifelike: they will acquire organic properties such as intelligence and the ability to heal and procreate. Both developments are still in their infancy, but they can be found in various fields. The warm organism and the cold mechanism are growing towards one another.


Biology is increasingly becoming technology and vice versa.


Following the pattern of the agrarian society and the industrial revolution, the information revolution also creates its own techno-political arena and a new worst-case scenario, and it calls for a new leading principle. This time, the cause of the central conflict is living nature: animals and plants, and especially humans. The fact that we ourselves are the resources – our bodies, our genes, our brains, even our attention, and our social world – raises numerous ethical questions in biological and cognitive fields. In the worst case, the resource ‘human’ will be exploited and depleted, both physically and mentally, in the same way that a reckless farmer depletes fertile soil or a heartless capitalist exploits a labourer. A crucial question is: what should be the leading principle in our efforts to avoid this terrible scenario? Two factors, however vague, could hold a tentative key to the answer: human dignity (for example, the right to respect, privacy, and physical and mental integrity) and human sustainability (the right to personal uniqueness: what aspects of humans and humanness are seen as manipulable, and what aspects would we like to keep?).


What will the bio-political arena look like, insofar as a sneak preview of the future is at all possible? Up to now, it is mainly gene-technological innovations that have sparked heated debate: pesticide-resistant maize, rice with extra vitamins, bacteria that produce insulin, farm animals named Dolly or Herman. Meanwhile, we are in the first stages of synthetic biology, where micro-organisms are seen as chemical factories waiting to be programmed. Today it is possible to rebuild any virus whose genome is known, and in 2010 the Craig Venter Institute even succeeded in doing this with a bacterium. These developments raise questions in fields such as safety, patents, and the technological manipulability of life.


Where these developments previously affected only other species, since 2001 we humans have also been drawn into the gene debate. In that year, the National Science Foundation, the American funder of scientific research, linked two fields: on the one hand, developments in the cognitive sciences and in nano-, bio-, and information technology (the so-called NBIC convergence), and on the other hand, the older dream of the so-called transhumanists, who want to use technology to create faster, stronger, smarter, and, while we are at it, immortal humans. Many technologies can be employed for such human enhancement, such as regenerative medicine, gene doping, concentration-enhancing drugs, bionic limbs, and direct stimulation of the brain. Deep brain stimulation, for example, which is currently used to counteract severe trembling in Parkinson’s patients, could also be used to suppress depression, and thereby to regulate our character. The latter raises essential questions about the manipulability of the brain, mental integrity and privacy, and the limits of the informed-consent principle.


A third element that must be included in the sneak preview is the emergence of intelligent and even emotional machines. We are already familiar with simple varieties, such as the talking softbot on the IKEA website and computer programs that advertise to us based on our interests. Much more questionable, however, are the remote-controlled, unmanned, armed aircraft known as drones. Their pilots sit behind computer screens in Nevada, at a great distance, both geographically and morally, from the Afghans, Yemenis, and others whom they kill with the click of a mouse. As one of these pilots says in the book Wired for War by political scientist P.W. Singer: ‘It’s like a video game. It can get a little bloodthirsty. But it’s fucking cool’. Other technology may counteract such indifference: some computer games have actually been designed to promote empathy with fictional characters. Such persuasive technology, too, can influence behaviour.


We ourselves are the resources – our bodies, our genes, our brains, even our attention.


The digital modification of our social life and the genetic modification of life will become increasingly important biopolitical subjects. But where is the debate? Of course, it can take some time before a developing technology becomes societally visible, and thus discussable. But that is not all. For a long time, roughly up to the Second World War, such debate was seen as superfluous. Both left- and right-wing parties considered technology a means for realising political ideals and stimulating economic growth, which was another word for progress. The government’s role was limited to stimulating these developments and controlling possible negative side effects afterwards.


As said before, this blind trust was shattered by the Second World War. Since then, there has been a need for democratic debate, at an early stage, on what laboratories have up their sleeves, although it took some time before this need was met institutionally. Since 1970, the United States has required an environmental impact report for large technological projects. Two years later, Congress got its own Office of Technology Assessment, which studied the societal effects of new technologies in the hope that politics could anticipate them in a timely manner. In the years that followed, European countries, too, started their own organisations for technology assessment (TA), such as my own employer, the Rathenau Instituut. Nevertheless, the dominant view, in TA practice as well, remained that of modernism: technology is the means towards economic growth and the solution of societal problems, and possible drawbacks can be remedied through policy interventions.


But is politics really capable of timely anticipation? This is disputable. First, there is the so-called Collingridge dilemma: at an early stage of a technology’s development, its effects cannot be predicted, and by the time the effects become visible, they are out of control. Related to this is the observation that social reality is stubborn, and that new technologies often have unintended and unexpected, sometimes paradoxical, effects.


At an early stage of technology development, the effects cannot be predicted, and by the time the effects become visible, they’re out of control.


In their recent book The Techno-Human Condition, Braden Allenby and Daniel Sarewitz dig a little deeper and claim that control over technology is ultimately impossible. To illustrate this, they distinguish three levels of impact. On level I, technology yields instant progress: a car will bring you from A to B faster than a bicycle. But then the phenomenon of system complexity strikes. On level II, new technologies and all sorts of existing technological and social systems affect one another, making the results unpredictable. Take again the example of the car. When cars were introduced in the Netherlands, men with red flags would walk ahead of them, and nobody could foresee phenomena such as motorways, commuter traffic, and the marriage between liberalism and 130 km/h speed limits. Finally, on level III, some technologies contribute to nothing less than transformations of the world as we know it. Cars have played a big part in numerous changes in the economy, ecology, politics, and culture. For those who doubt this, think of mass production, climate change, the Middle East, and individual freedom of movement.


What goes for cars will undoubtedly go even more strongly for a number of the emerging technologies mentioned above. After all, a car is but a vehicle, while the new wave of innovations can change us as human beings. Level I discussions on this are hardly useful; levels II and III must be acknowledged and, if possible, explored, bearing in mind that developments on level III are relatively autonomous. When undesirable characteristics have surfaced on level III, we are more likely to consider them unattractive aspects of the world as it is (much like menopause and petrol stations) than problems that we can get rid of.


We live in the ideology of achievement. Everything is manipulable.


Allenby and Sarewitz’s book is recent (2011), but more than half a century ago Heidegger already showed that those who view technology in an instrumental way overlook its essence. He, too, claimed that technology determines societal dynamics to a large extent. He feared that we would become so enchanted by technology that what he called ‘calculative’ thinking would become the only kind of thinking. What Heidegger feared has come to pass, according to the philosopher of technology Stiegler: we live in the ideology of achievement. Everything is manipulable.


The Enlightenment belief in progress through technology still prevails. Believers still claim that mankind is in charge, so that as long as we are vigilant, we need not worry. There are good reasons for not sharing such reckless self-confidence. Innovations change the future in whimsical ways, politics has less grip on this than we believe, and we are so enchanted by technology that we can hardly resist it. Where we believe in progress through technology, driven by political ideals and debate, what we actually have is a soulless development through technology, driven by the belief in technological manipulability. Meanwhile, the confident belief in progress through technology stands in the way of a proper political debate on the societal role of technology.


The information revolution is in great need of such a debate, in which human values and visions of the good life play a central role. After all, the industrialisation of our bodies, our minds, and the social domain is at stake, with all the ethical and political issues that ensue. Continuous anticipation of separate technologies (TA at level I) is not sufficient, because it leaves the broader context out of account. The broader technological trend and the technological ideal of makeable life must be made into political and societal issues, and we must explore, to the best of our abilities, how they may transform society.


Our humanness is too important to surrender to the forces of technology and economy.


The moral shortcomings of the current technology debate are best shown by the total absence of criteria for orienting ourselves in the technoscientific future that will inevitably be thrust upon us, to cite Stiegler once more. What he means is: we do not know what we want. What we want is perhaps the most important question facing us in the 21st century, because its answer touches on our fundamental values and moral codes, and it could lead, or not, to a worldwide intensification of religious and moral conflicts. There is therefore a need for political discussion on the moral principles that must give shape to the new technological wave. If we refrain from this because of our belief in progress, it is at our own risk. In that case history threatens to repeat itself. The social issue was first taken seriously in the first half of the twentieth century, the ecological issue even later (farmers were the exception, as their survival depended on fertile ground). Suppose that citizens and politicians at the beginning of the industrial revolution had made social justice and ecological sustainability their leading principles: what would the world look like today? Our humanness is too important to surrender to the forces of technology and economy, because were we to do that, technological manipulability would become the (political) guiding principle. Let us therefore abandon our naïve belief in progress and break the big silence on how the information revolution is changing us, so that we can begin searching for common moral principles that give a dignified political direction to these changes.


***


This article is an adaptation of the chapter ‘The ideological emptiness of technology debate: the big silence on how the information revolution changes us’, which appeared in 2012 in the book Stille ideologie: Onderstromen in beleid en bestuur (translation: Silent Ideology: Undercurrents in Policy and Management), edited by Cor van Montfort, Ank Michels and Wouter van Doorn, and published by Boom/Lemma.


The Dutch version of this essay was published on October 17, 2012 in De Groene Amsterdammer (pp. 26–29) under the title Wat hebben de laboratoria voor ons in petto? Het morele tekort van het techniekdebat (translation: What do the laboratories have up their sleeves for us? The moral shortcomings of the technology debate).
