© Copyright 1997 Princeton University Press

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher, except for reading and browsing via the World-Wide Web.

Chapter 12: Invisible Idiots

Introduction

The recent literature on the growing use of computers has produced many critical studies of the relationship between the individual and the newly reconstructed society. Sherry Turkle has studied the transformative effect of immersion in the new world of interactive computer technology on human personality and culture.1 Howard Rheingold has not only extolled the prospects of the new virtual community, but warned of the darker implications of immersion in virtual trivialities and the prospect of intrusive monitoring and the loss of privacy.2 And Jeremy Rifkin has extended the traditional arguments over labor and the loss of jobs and skills into concern that we are entering a postmarket world where all labor, at all levels, that is not directly concerned with manufacturing and supporting the computerized networks will become first devalued and then displaced.3

My specific concern, however, has not been the direct impact on the individual. What motivated the research that led to this book was that neither the significance nor the immediacy of the use of computers in business, the military, and other large organizations was receiving the same degree of attention. Both computerization of operation and control and the growth in the scope and capabilities of computerized networks are rapidly altering the nature, approach, and goals of large organizations, often transforming them into structures and webs that cannot be understood, let alone managed and operated, by unaided human beings.

From time to time there issues from the computer community some pronouncement about the increased capability of computers to enhance or emulate some realm of human activity that is meant to both startle and impress. Most typically, the ones that receive the greatest critical attention are not those that make particularly effective use of the mathematical, memory, or networking power of computers, but those that seemingly intrude into those realms that are uniquely human. Progress in artificial intelligence is one such, robotics another. The role of computers in creating and maintaining elaborate webs of differentiated activities, such as the coordination and scheduling of continent-wide organ transplants, or the managing of complex inventories, is at best relegated to the science journals or the business news, with almost no forethought about the long-term and systemic implications of the increased dependencies and vulnerabilities that are thereby created.

To some extent, this is because such long-term and structural effects are often diffuse and indirect. Consider, for example, the increasingly obvious statement that by some future year (say 2050), there will exist computer chess-playing programs that no human being will be able to beat. Many in the technical community seem to find such statements very profound, to represent a real and direct intrusion into the realm of human capabilities. Those of us who are more immersed in the social world of human responses and human interaction often find this puzzling, and respond simply: So what? What are the consequences? What are the implications? The puzzles of the world in which we live arise not only from individual behavior and capabilities, but from more complex, socially constructed webs of action, reaction, and interaction that make up the socio-political environment. What matters is not whether computers win, but how that will affect human perceptions of their role and place in the world.

As computerization of social and business organizations continues apace, however, the temptation to apply the "so what" factor is increasingly tempered with caution. Computers, networks, and the automation and interconnection they foster are no longer just the socially and technically constructed outputs of human activities; they have become part of the socio-economic-technical construction kit from which future societies will be assembled. If those who are designing, programming, installing, and implementing computerized systems for business, industry, finance, or the military show no greater appreciation for the relationships among and differences between personal, social, and technical objectives than they have shown in their pursuit of the computerized Grand Master, prospects for the future are alarming indeed.

Standardization and Slack

Gurus and acolytes of the information revolution argue that unlimited desktop power linked through global networks will enrich and enhance human life by providing individuals, even (or perhaps increasingly) isolated ones, with access to enormous bodies of information in unimaginable variety. But much of what passes for distributed information is eclectic, idiosyncratic, frequently disorganized, and often unreliable. To a great extent, it is large organizations that create, maintain, provide, and sometimes control the repositories of systematic, organized information, and, in large organizations, efficiency and standardization have a logic of their own. Whatever the faults of the "modern" hierarchical organization, those far down the chain from the top were always able to find enough leeway to create individual, differentiated means of operating within specified constraints. What the web of interconnection makes possible, and therefore makes probable, is a push for "efficiency," for standardization and compliance at an unprecedented level of detail.

Imagine a world in which every cook in every restaurant has an expert assistant to ensure that every dish prepared reliably tastes the same, anywhere you order it. Boring, perhaps, but efficient.4 What about integrated global markets that are open only to the largest and most powerful global corporations? More than just probable. What about a world in which every supermarket not only tracks check-outs for managing inventory, but uses the information to change deliveries to stores to ensure that valuable shelf space is not taken up by items that move slowly, or erratically? Very real indeed, as the success of Wal-Mart will attest.5 If present trends continue, the same means of managing space may soon also be applied to newly computerized libraries, and then perhaps to other overcrowded databases, turning them into Wal-Marts of the mind.

What about a fully re-engineered, computer-linked industry in which parts inventories and buffer stocks are held to a bare minimum, with a web of communications linking specialized shops all over the globe so that an order for any complicated good is turned into a network of specialized orders, calculated, coordinated, and integrated by a web of interconnected computers? Futuristic? Not really, for that is already the goal of many, based on the existing practice of just-in-time manufacturing.

In the longer term, however, what is more troubling is not the transformation of organizational structures, but the intrusion of automation into organizational management and operations, much of it promoted by technical experts who seem to view the interactive social world as an exercise in game theory--not chess, perhaps, but, however elaborate and complex, subject in principle to more efficient, effective, and rapid play through the intervention of computers and computer networks. Good management and operation is an art. All a computer can do is to efficiently manipulate a predesigned mathematical representation of a social activity according to a predetermined logic--as long as both rules and model are fixed.6 In short, convert art to artifice.

In many realms of human activity, computers are now performing with precision and predictability many tasks that human limitations prevent human beings from doing as exactly, or as reliably. What is lost in many cases is not just variety, and specific human skills, but the capacity to nurture, enhance, and expand them through the messy processes of direct, trial-and-error learning. Computerization and automation are also wonderful promoters of the icon of technical efficiency, as opposed to the duplicative and often haphazard maintenance of sufficient extra resources to control or mitigate the effects of human mistakes. Of particular concern is the degree to which what is destroyed or discarded in the relentless pursuit of technical and operational efficiency is not waste or slop, but "slack," the human and material buffering capacity that allows organizations and social systems to absorb unpredicted, and often unpredictable, shocks.

The social costs in many of these cases may seem moderately but not critically serious. In the short term, that may be true. To those of us who study indirect and collective effects on human behavior and long-term trends in human interaction, however, the potential effects on the future structures, social formations, and organizations of human societies, and the potential losses of social means for learning, social capacity for adaptation, and social space for innovation and creativity that such cases point at, if not actually represent, are more than a little disturbing.

Virtual Organizations in a Real World

Another recent development that has drawn considerable interest is virtual reality (VR). At first, the idea was confined primarily to the individual user. The primitive sensory-tactile "VR" sets in amusement arcades are only the first step to a promised world of virtual reality in which human beings can climb mountains, or walk on the moon, or duel, or make love, without risk and without physical harm. But transcending the body is not what is attracting commercial attention. To many, the goal of virtual space, like the goal of computer chess, is to use computers to go beyond the capacity of real, physical human beings, and, eventually, to move beyond simple representations of human space and into a wide variety of larger-scale social activities. The Internet and its future successors are already being promoted by their developers and enthusiasts as opening the door to the creation of extended virtual communities in which people can interact without the time-consuming and frequently messy processes of physical movement and physical intimacy.

If the many journals and articles written and read by the prophets and promoters of organizational computing are to be believed, the ultimate goal is to blend the presumed interpretive and logical skills designed into gaming machines such as computerized chess programs with the architecture and connectivity of the Internet, and then supply the result with a variety of tactile and direct interfaces--not only to surpass but to transcend both the analytic and physical limitations of human beings. What most fail to realize is that in so doing, they will be deconstructing the historically socially constructed worlds of business, and of the military, transforming them into computer-constructed virtual spaces in which automated, computerized systems interpret human wants and needs and translate them into reality via adaptable automatic machinery.

The virtual factory will be able to take a customer order and turn out a customized automobile, or couch, within days, if not hours, using automated machinery and just-in-time delivery systems. The virtual corporation will be able to present to the world whatever face is contingently the most appropriate, adapting and configuring internally via computers and computerized networks from, perhaps, an oil company on Monday to a fertilizer company on Tuesday.7 The virtual military will be able to throw together almost immediately the force mix appropriate to any crisis or need, set up a command-and-control structure, and apply force intelligently and rationally in pursuit of specific goals and objectives.8 And if markets can be made any more virtual than they are already, they might perhaps trade in purely conceptual instruments that relate only to what people (and other computers) think might happen, or should happen, making and losing money as a pure computer game instead of being tied to the slow and cumbersome processes of real economic activity.9

Many critics have pointed out that the uncalculated risk in all of this is the risk of failure, either direct and mechanical failure of the machine through error or accident, or software failure owing either to limitations of the program or to errors in programming. In cases such as nuclear plant operations, a few of the more farsighted have even pointed out the dangers of believing that human programmers, or any electronic analogue they concoct, can anticipate all contingencies or provide for all circumstances, and worry about how the computer, or the computerized controls, will respond to a circumstance that announces time and safety urgency but provides no input that quite fits the set of programmed responses or analytic techniques.

The case of glass cockpits raised another linked, but less direct, concern about the possible loss of expertise that will occur if the computerization and automation of complex and risky systems not only prevents humans from gaining expertise, but interferes with their learning how to respond at all without computer aid. This too is the replacement of art with artifice, but in this case a very consequential one. Pilots and other operators live in a world of irreducible uncertainty, of "unknown unknowns," where they must deal with what was not known not to be known.10 No computer can be programmed to deal with contingencies that cannot be anticipated. Coping with such "rogue" outcomes is in fact the art of the superb manager or master operator.11 But who will master the virtual world of virtual corporations operating over virtual links? Indeed, who will operate and manage them?

Once, many years ago, as a student at the University of Chicago, I followed on a real, physical board a duel between two blindfolded chess masters--a game of real chess being played in virtual space. That was, and is, a tremendous display of human beings working beyond their normal limitations. What is now happening in the social and economic world as the widespread computerization of communication, information, management, and control pushes organizations and systems beyond human limits seems instead to be a virtual game being played with real pieces. The difference matters.

Conclusion

The long-standing search for an automated chess player was an understandable pursuit of the computer community, and perhaps a nearly perfect one for them. Chess is, after all, a highly structured, artificial game with finite rules and finite possibilities. It is more challenging to humans than most formal games because there are so many possibilities, and the strategies are so complex, by human standards, that proficiency (let alone expertise) is not easily gained. What is more, skill at chess is highly valued in academic and professional communities, such as those from which the designers of computer chess players tend to be drawn.

The design of a computer chess player that could defeat a human master was once an impressive accomplishment because of limitations on the size, speed, and algorithmic sophistication of the machines. But as a demonstration of machine intelligence, it is more mechanical than intelligent, its "intelligence" of relevance only in a universe where rules are finite and fixed, behavior bounded, and causal chains deterministic and simply linked. Sooner or later, the machines will be sufficiently large, and sufficiently fast, that they will be able to find brute-force solutions to all possible chess moves, even without simplifying algorithms. At that point, no one will be able to beat them (nor will they be able to beat each other).

For those who find delight in the computer's ability to master skills once held by only a few human beings, that will be a meaningful, perhaps even an important achievement. For human beings who play chess with other human beings, for enjoyment, as a test of comparative cognitive mastery, that will be, or should be, of no importance whatsoever. Computers cannot really "play" chess at all, for they have no knowledge of what chess is, to a human, and no conception of why humans play it.12 Chess as a human activity will remain an exploration of the ability to recognize patterns in the face of near-limitless possibilities, and to learn how to act into an uncertain future not recklessly but with forethought and insight. But for others, the erosion of context, the knowledge that there is a machine that can play a better game from any position and make a better move in any situation, will strip the game of importance and meaning.

That people possessing operational skills corresponding to these are now threatened, even if only indirectly and unintentionally, by the new modes of automation is an immediate concern, but not a long-term one. The long-term consequences will arise from the web of secondary effects, ranging from the loss of the basis from which such skills can be constructed to the creation of a socio-technical dependency on operating at such high levels of efficiency and complexity that human beings will not be able to manage or control the effects, intended or unintended, whether the computers break down or not.

What I have called the "computer trap" is a shorthand for the elaborate, long-term, collective effects of the possibly irreversible and largely unexamined drive to computerize and network everything and anything whose efficiency or economic performance might thereby be improved. In the process, those who re-design and re-engineer the large-scale social and socio-technical systems that are essential for managing the complexities and structures of modern life seem to have little understanding of the potential vulnerabilities they are creating. Such effects are already being noted in the similar, persistent search to eliminate from hazardous systems all possible sources and causes of "human error." Whether those systems be military or industrial, financial or bureaucratic, the increased tightness of coupling, lack of redundancy, and speed of response will make human intervention or control difficult at best when (and not if) something goes wrong--particularly for those systems whose means and mechanisms of operation are so deeply embedded in the computers that operate them that no human being fully understands them.

There is an apocryphal story about early efforts to construct intelligent machines for language translation. Having received a new program to intertranslate English and Russian, the operators decided to check the system by translating phrases from English to Russian and then back again. To make the test doubly hard, some sentences and phrases were deliberately chosen to be short, metaphoric, and free of other clues as to interpretation and meaning. One of them was: "Out of sight, out of mind." What came back into English, after retranslation, was: "Invisible idiots."

As computers become more deeply embedded as the underlying performers of a wide variety of social and organizational tasks, they are also growing more and more invisible. It is all too easy to forget that they are also idiots, having no information other than what has been supplied them and capable of doing no more than what was programmed into them. And they are no more capable of understanding or predicting indirect and long-term consequences than were their designers or programmers. They require constant, intelligent, and informed monitoring. Over time, they will be increasingly out of sight, but they must never be out of mind. Otherwise it is we, and not the computers, who will become invisible idiots.

NOTES:

1 Turkle, Life on the Screen.

2 Rheingold, Virtual Community.

3 Rifkin, End of Work.

4 There will, of course, always be human chefs and other personalized human services available to individuals in a high enough socio-economic class. Indeed, the scarcer such resources become, the more valuable, and the more likely that high-income individuals will pay for them as visible signs of status. This can easily be generalized to a wide variety of other human-centered activities that are otherwise displaced by computerization and automation. See, for example, Hirsch, Social Limits to Growth.

5 See, for example, Rifkin, End of Work, 104ff.

6 Moravec, "Pigs in Cyberspace."

7 Davidow and Malone, Virtual Corporation.

8 Demchak, "Information Age Army."

9 Adam Smith, "Derivatives."

10 Demchak, Military Organizations.

11 Karlsen and Oppen, "Professional Knowledge"; Gullers, "Automation--Skill--Apprenticeship."

12 Negroponte, Being Digital.