[¶2.] In his learned and lucid book, Autonomous Technology, Langdon Winner argues that the historical theme of technology-out-of-control misplaces and disguises human responsibility for outcomes by assigning to artifacts instead of people the processes of volition and choice.1 The social and institutional processes that have shaped and controlled the introduction of computers into a wide range of social and organizational settings are no more autonomous than any other. However, they do seem increasingly to be driven not by the needs and demands of users, or of society at large, but by an internal dynamic that originates in and is sustained by the industry and its "designers"--those people who specify how and where computers will be used, with what hardware, and with what software and interface. Progress in electronic digital computing is not an autonomous process, but it has increasingly become an autogamous one--self-pollinating and self-fertilizing, responding more and more to an inner logic of development than to the desires and needs of the user community.
[¶3.] The computer industry is not like traditional industries in which the normal course of events once maturity is reached is incremental evolution and struggle for market share on the margin. The companies that drive and dominate the computer industry market, and the people they hire, are committed to innovation and change, to rapid market growth and new products. Lacking any new breakthrough or killer application that would create a body of user demand on the scale of the spreadsheet, they race with themselves, and each other, to create new variants and add more gadgets. As each generation of hardware gets faster and more powerful, each generation of new software more than consumes the incremental gains, driving users to search for even newer and more powerful machines, and so on ad infinitum.
[¶4.] How did this come about? There are few answers in the existing literature, for the question is rarely asked.2 I do not presume that I can fully address it here; that would require, at the minimum, not just an extension but a reformulation of the present theory of the development and evolution of large technical systems.3 The social history of electronic digital computing and networking does provide some data and a few examples that may prove useful for the future development of a more comprehensive analysis. It also provides the essentials for understanding how the later interconnection of computers into networks of immense capacity and scope was able so easily to impose on the newly decentralized web of users elaborate sets of standards and rules that severely limited their discretion and choice, even as networking was being promoted as a means of opening up the world.
[¶6.] The history of computer hardware is in itself fascinating and important. Without the incredible pace of hardware development, miniaturization, and decreased cost that followed the introduction of solid-state electronics, there would have been no computer revolution, and the social and political effects on which this book focuses would never have been possible. But hardware in itself cannot be an agent for change. In the case of the computer, coevolution with software made the difference; despite market-driven races to develop faster and ever more powerful hardware, and despite fierce and often nasty competition among suppliers of software to capture market share through specific features, it is the nature of the application, not the machine or the programming, that attracts potential users.
[¶7.] As with nuclear energy and a number of other fields that had once been the subject primarily of scientific inquiry and discourse, the development of the electronic digital computer was greatly spurred by the Second World War. Although the purposes were at first narrow and specialized, the concentration of the research community and the comparatively huge resources given to it were to form the foundation of the postwar effort, with continuing governmental support. The government also continued to support the initial development of programming languages and other fundamental software that became central in remaking computers into general-purpose analytical tools, electronic successors to Charles Babbage's dream of a generalized "analytical engine."4 Over time, these capabilities were transferred first to developers and government agencies, and then to large, central firms doing considerable government business.5
[¶8.] Those who got their start in one or another of the early military projects, or consulted for them, eventually moved out into private industry in search of uses, applications, and markets. But the military roots remained, and it was not until the 1960s that commercial markets grew large enough to support a fully independent industry. As the computer revolution picked up speed, entrepreneurs and just plain technical enthusiasts moved in, and most of the hardware and software now in common use was developed specifically for commercial applications. Much of what remains of direct government support is focused on the development of specialized equipment and software for dedicated purposes.
[¶9.] But the U.S. government continues to see the development of advanced computer techniques as having important implications for national security (i.e., military) purposes, and therefore remains heavily invested in supporting fundamental, forefront research, particularly in high status areas such as artificial intelligence. This also has consequences. The community of government-supported superstars, often far removed from real users or immediate needs, continues to shape the attitudes of programmers throughout the entire industry both by example and by its ability to shape the goals and beliefs of the social universe in which designers live and work.6 And that universe in turn was formed and shaped by the history of the industry.
[¶11.] The first large, digital computers were hardly more than electric realizations of Charles Babbage's difference engine (and many were barely electronic). Their purpose was primarily to perform huge calculations rapidly and accurately (i.e., what they computed were numerical results, such as firing tables for large artillery and naval guns), and each was operated through a tedious and usually quite idiosyncratic process of setting up a calculational program by hand, via an enormous number of switches and patch connections.7 In those early days, progress was measured largely by increases in the size and capacity of the machines. As it was widely believed that the total social demand for computers at this scale was for no more than a few dozen, or a few hundred, machines--large, expensive, and run by in-house experts--there was no particular reason or economic motive for the development of general, nonproprietary software or simplified front ends for operation.
[¶12.] The invention of the transistor and the creation of the integrated circuit caused a major reorganization and redesign of the nascent industry. Smaller, faster, more reliable, and with much simpler requirements for power and cooling than their vacuum-tube predecessors, machines using solid-state circuitry revolutionized every aspect of electronics from simple radios to the most complicated computers. In only a few years, the huge, ungainly machines that gave off enormous quantities of heat, and whose time of continuous operation between failures might be measured in hours, or at best days, were completely superseded by solid-state, integrated-circuit machines that were inherently faster and far more reliable.8 The failure rate of the first solid-state computers was orders of magnitude smaller than that of their vacuum-tube predecessors.9
[¶13.] Because of their size, cost, and complexity, the commercial manufacture of the first mainframe computers was a task not to be entered into lightly. The field soon came to be dominated by a few large firms, including a preexisting corporate giant--IBM--which gained knowledge from government contracts and market access from the reputation and experience it had acquired in dominating the market for electromechanical office equipment. IBM controlled half the market by 1955, only two years after its entry, and more than 65 percent by 1965, when the business had become known as "IBM and the Seven Dwarfs."10 Although other companies continued to compete, it was IBM that determined the course of development.
[¶14.] The situation as of the mid-1960s has been neatly summarized by Larry Roberts.
In 1964 only large mainframe computers existed, each with its own separate set of users. If you were lucky the computer was time-shared, but even then you could not go far away since the terminals were hard-wired to it or connected by local phone line. Moreover, if you wanted data from another computer, you moved it by tape and you could forget wanting software from a different type of computer.11
[¶16.] Such standards as existed for moving data about were mostly set by IBM, which had adopted its familiar accounting technology, the Hollerith punched card (modeled in turn on the cards used for more than a century to program Jacquard looms), as a method for providing input.12 Even the later development of magnetic storage and standardized programming did not free users from being tied by their computers and terminals to a set of very restricted environments.
[¶17.] Technical and systems evolution had gone down the same socio-historical path as the classic infrastructure technologies of the early part of the century, with one important exception. Although expansion and growth were still controlled by managers who judged them by actual performance and real returns, the large, centralized computer centers that emerged, and that still dominate many companies and government organizations, seemed forbidding, remote, and, with their cadre of experts speaking arcane languages, sometimes threatening. The combination of the isolation of the centers with the presumed power of the computers they operated became a focal point for a public that increasingly felt that technical change was threatening loss of control in "modern" societies. As such, the centers became an obvious target for expressions of social concern about the future, in forums ranging from editorials to popular art.13 What few people realized was that the mainframes were dinosaurs, soon to be almost completely displaced except for specialized purposes such as running huge, complex mathematical models.
[¶18.] What did survive, however, was a rather unique social-organizational legacy, the creation and empowerment of a small cadre of hardware and software designers and highly trained operators whose special skills and special languages isolated them from the rest of the organization and left them free to pursue their own design and developmental goals.14 Given the cost of the new computer centers, and the need to justify both their existence and their budgets, these internal desires were always subject to rather strict limitations. But the precedent of autonomy and control had been set.
[¶20.] Progress in the semiconductor industry, driven in part by the search for defense-related applications, proceeded at a ferocious pace that has not yet let up; every year, components continue to get smaller, cheaper, faster, and more complex.15 During the 1960s, solid-state circuitry had progressed sufficiently to significantly lower the costs of entry into the burgeoning computer market, triggering a second developmental wave that was to carry the computer industry far from its megalithic beginnings in less than a decade. Although IBM continued to exploit the decreasing size and cost of solid-state circuitry to improve and expand mainframes, a new entrepreneurial firm, Digital Equipment Corporation (DEC), chose to avoid competing with IBM by producing a line of "mini" computers--small but powerful laboratory and business machines that did not require specially prepared and conditioned rooms. Eventually, the minicomputers were to become small enough to actually be placed next to a desk, or even on top of one.16 More to the point, they were to expand the community of programmers to include a large number of people independent of both hardware manufacturers and large, centralized computer operations.
[¶21.] Because the DEC machines were particularly flexible and powerful, and because DEC welcomed participation and involvement from the research community, they quickly became popular in laboratories and universities.17 With the introduction of UNIX, an adaptable and open operating system that had been developed at AT&T's Bell Laboratories and widely disseminated for a nominal fee, DEC/UNIX systems and networks became ubiquitous in research laboratories, universities, and, eventually, classrooms across the country.
[¶22.] Mainframes were fine for grinding out big, difficult calculations; as a means for communication, for the new art of text processing, or for performing simpler tasks, they were at best clumsy. The relative openness and transparency of the UNIX system, the power and simplicity of the high-level programming language (C) that had also been developed at Bell Laboratories and in which it was coded, and the incredible facilitation of interpersonal networking, at first at individual sites and then among them, created expectations and demands in the community of sophisticated users that could not be easily fulfilled by centrally controlled hierarchical computer centers and large, powerful mainframes.
[¶23.] The social consequences were profound. Intelligent, mathematically skilled, eager to make use of the newly accessible power of computing in their laboratories and even classrooms, and devoted to the new means of access and communication, the new community accepted as part of the cost its own lack of control over system development. Although participation by users in system and interface design had raised expectations about devolution of control and decentralization of authority that were very much at odds with the mainframe tradition, it also paradoxically reinforced the ceding of control over system evolution and development to those who were physically in charge of the computers. If DEC had a new machine, or if Bell Labs or its own computer center had a new release of or improvement on UNIX, it was the computer center and not the user community that dictated when and where it would be installed or adopted.18
[¶24.] The long-term, probably unintended, but possibly deliberate, consequence was the emergence of a community of programmers who demanded respect, and even acquiescence, from the user community at large, while insisting on their own independence and autonomy from large companies and organizations, including, at times, those who owned and operated the facilities at which they worked. This complex of behavioral patterns, added to and reinforced by the tradition of autonomy that characterized the mainframe computer centers, is a legacy of the origins of digital computing that persists to the present day.
[¶26.] While researchers back East were concentrating on the minicomputer transformation, an eclectic collection of electronic tinkerers were working in garages and workshops around the San Francisco Bay Area on something even smaller. Many had dropped out of major computer corporations to pursue the dream of a computer that was entirely one's own,19 often overlaid with a libertarian philosophy that blended the radical and communitarian thought that emerged during the upheavals of the 1960s with the traditional American dream of the independent small business entrepreneur.20 Of necessity, the resulting machine would have to be relatively small, simple, and inexpensive; easy to maintain and upgrade; and convenient to program and operate.
[¶27.] In 1971, a small Silicon Valley company called Intel announced the result of two years of research--an integrated circuit that put the essentials of a computer on a single chip. Christened the microprocessor, the Intel chip was fully programmable despite its small size. The microchips seemed destined for the arcane world of pocket calculators until, in 1974, Ed Roberts decided to build a computer kit.21 When the Altair hit the market in 1975, the response was almost frenzied. The following year, 1976, was the annus mirabilis of the microcomputer transformation. From Silicon Valley and its surroundings flowed the commercial realizations of the intelligent video display terminal and the miniaturized floppy disk; the first standardized bus; BASIC and CP/M, the first programming language and operating system for microcomputers; and Electric Pencil, the first microcomputer word processor. And linking them all together was the Homebrew Computer Club, the irreverent, anarchic think tank of the new industry.
[¶28.] The third wave of computing emerged from what was quite literally a garage operation in California, when Steve Wozniak designed the Apple I, primarily to impress the club.22 The Apple I was hardly more than a circuit board, but its successor, the landmark Apple II of 1977, was the prototype of every desktop machine. Using a keyboard for input instead of toggle switches, with a video display system instead of blinking lights, and with a small, flexible (floppy) magnetic disk for storage, it is as recognizable to the modern user as a Model T--to which it might be aptly compared, for the widespread adoption of the Apple II and the word spread by its dedicated users reconstructed the meaning and image of electronic digital computing.
[¶29.] Given that DEC and others had already appropriated the term minicomputer for their now midsized models, the Apple and its descendants came to be referred to by the somewhat inappropriate appellation of microcomputers, perhaps because of the microprocessors that lay at their heart. The most familiar term in use today, however, is the one that IBM appropriated in 1981 for its first ever desktop computer--the personal computer, or PC. At first, the person in question was far more likely to be a computer professional or dedicated hobbyist than a typical office worker or member of the general public. The hardware was nice enough, but what was it for?
[¶31.] In the world of mainframes and minicomputers, the proprietary nature of operating systems was to some degree compensated for by islands of standardization in programming software, some promoted by the government and some by business and corporate interests.23 Having been deliberately developed outside of those worlds, software and operating systems for the first personal computers were even more chaotic and idiosyncratic than the machines themselves. At one time, for example, almost every manufacturer, including those who had standardized on the same Intel chip and the same underlying operating system, used a unique format for floppy disks. Exchange of software and data between users was a trying and often frustrating experience--when it could be done at all.
[¶32.] As with the historical cases of automobiles, electricity, and telephones, increasing acceptance and use was accompanied by a demand for standardization. Over time, two primary standards emerged--that of Apple Computer, closely tied to and integrated with its own hardware, and the more open system that serves not only the descendants of the first IBM PC but also the world of clones that now dominates the microcomputer market.
[¶34.] In the early 1970s, a group of researchers at the Xerox Palo Alto Research Center (PARC) was pursuing a vision of the future of computing that was radically different from that of IBM or DEC.24 The market niche that Xerox had originally sought to penetrate was the market for powerful, specialized desktop computers, primarily graphics workstations for the growing sector of electronic computer-aided design (CAD). At Xerox PARC, the basic unit of computing was therefore taken from the outset to be the individual user workstation. The researchers at PARC were encouraged to break with the notion that the computer user community was a narrow and highly specialized one. Whatever they developed had to remove, or at least greatly reduce, the burden of learning and memorizing long lists of commands in order to master the system.25
[¶35.] The PARC researchers sought to provide users of desktop computers with an interface that featured simplicity of organization and command to mask the growing complexity of operating systems and software. The traditional command-line and character-based interface of the early machines was leading to ever more arcane and elaborated sets of commands as machines and programs evolved in capability and sophistication. Manuals grew thicker and more complex; a whole secondary industry grew up around publishing books to supplement the traditionally horrible manuals. Particularly in commercial applications, organizations found themselves once again driven to hire professional programmers to master the software.
[¶36.] The solutions proffered by the PARC crew are now legendary. The Alto, completed in 1974, but not marketed commercially until 1977, had the first production bitmapped screen, now familiar to every computer user, instead of the character-based system used by the old video terminals.26 It had an interactive, video-oriented menu interface and a graphic pointing device to move around in it. And it was the first computer ever advertised on television (in 1978). Although Xerox never marketed it effectively, and only a few thousand were ever sold, the idea of a bitmapped screen capable of doing graphics instead of a character-oriented one that was the video equivalent of a teletype was very, very attractive.27
[¶37.] As Apple Computer matured, it incorporated many of the features developed at PARC, along with some of its staff and many of its attitudes. The mice-and-menus approach that became the characteristic signature of the Apple Macintosh line was created by a cadre of devotees who regarded the old method of input by keyboard and character as totally archaic and impossibly linear. Optimizing performance and speed while providing a consistent, iconic, graphically oriented interface required a very sophisticated operating system, one almost completely shielded from the user. Indeed, until quite recently, Apple ran the equivalent of a proprietary shop, refusing to divulge some elements of its hardwired code except to developers under very strict license agreements.
[¶38.] Apple's attitude toward users is familiar to those who followed the history of DEC, or of PARC. In principle, easily understood interfaces and sophisticated but user-invisible processing free users from the need to understand cryptic and arcane commands, or to learn much of the inner details of the machine. In practice, what it creates is an asymmetric dependency relationship, in which the designers and programmers are free to do what they feel is right, or necessary, and the user has little choice other than to accept it, and stay current with the latest version of software, or to reject it and drift down the irreversible path of obsolescence.
[¶39.] Apple's closed-system approach, defended by aggressive lawsuits against those attempting to market clones, protected its market niche, but also kept it narrow. Although Apple also benefited from the rapid growth of the 1980s, its base of loyal customers remained at about 10-12 percent of the total desktop market, with an even smaller share of the commercial and business sectors.28 As a result, Apple finally opened its system to licensed clone makers, generating the first Macintosh clones. But the relationship between users and operating systems remained the same. Moreover, it has spread into the other community of PC users, casting a long shadow over their dream of open and accessible systems.
[¶41.] Apple may have generated the vision of individual, personal computing, but it was the IBM PC of 1981 that was to spread that vision widely and carry it into the world of commerce and business. Instead of keeping the machine specifications closed and guarded, and writing its own operating software, as it had traditionally done for mainframes, the IBM personal computer division chose an open-system approach, making its hardware standards widely available and buying an open operating system (DOS) from an independent vendor, Microsoft.29 Secure in the belief that IBM would dominate any computer market in which it competed, the company was willing to create a standardized market in which independents could develop programs and manufacture disks without having to acquire special licenses or sign nondisclosure agreements, for the sake of faster and more expansive growth.30
[¶42.] The rest, as they say, is history. The standardized, reliable PC with its new operating system, carrying the respected logo of IBM, was welcomed into homes and offices with an eagerness that no one in the industry could ever have imagined. Within only a year, the term "PC" had entered the common language as the generic term for a desktop microcomputer. Sales soared, and by 1983, Time magazine had a computer on its cover as the "machine of the year." Within a few years, PC systems with MS-DOS and standardized disk formats and software had created a whole new market, pushing CP/M and other competing systems aside. Small, entrepreneurial companies either adapted or found their market share shriveling to a narrow and specialized niche.31
[¶43.]
[¶44.] In 1981, when you said PC, you meant IBM. By the 1990s, the future development of PCs was largely in the hands of Intel, maker of the continually evolving line of processor chips that lay at the heart of the machine, and Microsoft, maker of operating systems and software. The evolving interactive complex of Intel chip code and DOS (and Windows) operating system was, in principle, still open, as it had been from the outset, but the gradual layering of sophistication and complexity required to make use of the new capabilities made other developers de facto as dependent on Microsoft's system programmers (and Intel's hardware plans) as they were on Apple's.
[¶45.] With each new release of Windows and the new wave of software to accommodate it, PC users grew more remote from and unfamiliar with what was once an operating system of remarkable transparency. Moreover, as Microsoft grew larger, the development and release of new versions of its operating system software no longer seemed coupled to user demand at all. What users wanted were fixes for existing products; instead, they got new ones, as often as not bundled in automatically with their new hardware, which of course was required to make full use of the new software. It is clear that there is a feedback loop running between Intel and Microsoft, but it is not clear that end user demand has any real place in it.33
[¶46.] Whatever the rhetoric of individual freedom, autonomy, and "empowerment," users are increasingly at the mercy of those who design and set the standards of the machines and their operating systems. There are now basically only two important standardized operating systems for individual desktop systems in common use, each with unique, and standardized, input-output and disk operating subsystems.34 And although they still claim to represent the two opposing world views of personal freedom and the individual entrepreneur versus corporate conformity and the multinational corporation, both the interfaces and the outcomes are converging rapidly.35
[¶47.] As the systems grow more complex, the dependence of users on a network of subsystems of design, programming, maintenance, and support to keep their machines and operating systems current and functioning continues to increase. As recently as a few years ago, many users still continued to try a little bit of programming, if only for the excitement. But the structure of Apple's System 7, or Windows 95, is as inaccessible to most users as it is invisible, and as far beyond their ability to influence or control as it was in the days of the mainframe computer centers.
The Search for the Killer App
[¶49.] Good hardware and a good operating system make for a smoothly functioning machine, but the question of what the PC was actually for remained open. What really spurred the market for personal computers, Apples as well as PCs, was the advent of the first "killer" application, the spreadsheet. Spreadsheets allowed serious business and corporate users to do financial and other modeling and reporting on their own individual computers rather than going through centralized data processing and information services, while being simple enough that even relatively unsophisticated home users could build numerical models or balance their checkbooks on them. And they could graph the results, automatically. The resulting surge of user-driven demand was unprecedented, almost completely unexpected, and has not been repeated since.
[¶50.] Word processing was an interesting task to perform on a computer, but not fundamentally different from what could be done (at least in principle) with whiteout, scissors, and paste. Once graphics were included, the spreadsheet was a nearly perfect expression of what you could do with a computer that you could not do without it; it combined the computer's specialty, memory and calculating power, with the most powerful and most difficult to replicate capacity of the human mind, the ability to recognize and integrate patterns. Once users discovered it, they became committed; once committed, they became dependent on it; once organizations became dependent on it, they demanded compatibility and continuity, firmly locking the existing Apple and IBM/DOS standards in place.
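The point can be made concrete with a toy sketch (purely illustrative; the cell names, numbers, and formulas below are invented here, not taken from the text). A spreadsheet is, at bottom, a set of cells in which some hold numbers and others hold formulas, so that changing one number recomputes everything built on it:

```python
# A toy "spreadsheet": each cell is a rule for computing a value, and
# re-evaluating the sheet after changing one input updates every cell
# that depends on it. Cell names and numbers are invented for illustration.

cells = {
    "sales":  lambda v: 1200.0,                    # raw input
    "costs":  lambda v: 800.0,                     # raw input
    "profit": lambda v: v["sales"] - v["costs"],   # derived cell
    "margin": lambda v: v["profit"] / v["sales"],  # derived from a derived cell
}

def evaluate(sheet):
    """Recompute the whole sheet in entry order (inputs are listed before
    the cells that use them, so each formula sees the values it needs)."""
    values = {}
    for name, formula in sheet.items():
        values[name] = formula(values)
    return values

print(evaluate(cells))             # profit = 400.0, margin = 0.33...

cells["sales"] = lambda v: 1500.0  # change a single number...
print(evaluate(cells))             # ...profit and margin follow automatically
```

A real spreadsheet adds automatic dependency tracking, so cells can be entered in any order, and, as the text emphasizes, the ability to graph the results.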
[¶51.] The irony of the Homebrew club is that the success of this anarchic collection of independent thinkers and libertarians created a bottom-up demand for de facto standardization whose results were not much different from those traditionally imposed from the top down by large, corporate manufacturers. Without the hardware being developed by the club, spreadsheets would never have been developed, and the market for microcomputing would never have expanded so rapidly.36 Programmers and companies have been searching ever since for the next "killer app," an application so powerful, and so attractive, that it will cause another boom in the market similar to that experienced in the early 1980s.
[¶52.] Meanwhile, success has locked the systems in place. Users demanded interchangeability and interoperability, as well as compatible disks. Just when the industry was in its major growth phase, providing immense amounts of capital for innovation and change, computer user magazines were using full compatibility as a major benchmark for evaluating new programs and new machines. With no dramatically new application on the horizon, and no reasonable way to reconstruct systems at the basic level, innovation and ingenuity, which remain among the prime cultural values of the programming community, seem to have turned into a race for features instead of real capabilities. The greatest irony of all is that it is now the features, and not the applications, that are driving the machines not only to greater power and complexity, but to greater uniformity and standardization.37
The Dynamics of Growth
[¶54.] The historical process of interactive growth and expansion that characterized earlier technical systems such as the railroads or the telephone can be described as the interplay between two realms of activity: the scientific and technical exploration of physical and engineering possibilities on the one hand, and the managerial innovation and social imagination of entrepreneurs, operators, and even regulatory authorities on the other.38 The balance shifts with time, between the search to exploit new capabilities and the effort to develop the equipment to follow up new opportunities, but in almost every case the nature of the interplay is relatively clear, even when causality is disputed.39
[¶55.] Broadly speaking, every large technical system has followed the familiar logistic curve, with market penetration beginning slowly during the period of development of techniques and markets, rising rapidly during the exciting period of social acceptance and deployment (often referred to as diffusion), and then flattening out as markets saturate. Social and economic change is small in the beginning. Direct social change is likely to be greatest during the period of most rapid adoption, when the rate of diffusion is highest, but there is some dispute as to whether economic gains are also most rapid at this time or tend to occur later, after the growth curve has started to flatten out. Paul David, for example, argues that maximum returns from new techniques occur only after they have a chance to mature, and claims that we are still ten or twenty years from realizing the productivity gains from digital computing.40 Others argue that using such traditional gauges as net capital investment or comparisons with traditional industrial machinery underestimates the maturity of fast-moving sectors such as the computer industry, and that the lack of productivity gains cannot be attributed to "immaturity."41
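For readers who want the curve itself, a minimal sketch of the standard logistic form (the symbols are chosen here purely for illustration and do not appear in the text):

    P(t) = K / (1 + e^{-r(t - t_0)})

Penetration P(t) rises slowly at first, steepens around the midpoint t_0, and flattens as it approaches the saturation level K, with r governing how quickly the transition occurs. The same S-shape is invoked below for individual and organizational learning.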
[¶56.] The difference between computers and computerized equipment, on the one hand, and hardware such as electric motors, dynamos, chemical processes, and production machinery, on the other, is not trivial. When the rate of change in the introduction of traditional technical-industrial innovations slows, markets saturate, and entrepreneurs and others seeking rapid profits and high rates of return turn elsewhere, as do the interests of the cutting-edge engineering community. The maturing industry is left with slow-growing, if predictable, sales, and comparatively small incentive for further innovation.
[¶57.] The experience with the adoption of computers, however, has been markedly different. Sales continue to expand rapidly even in markets that are numerically saturated; although the rate of diffusion has been high, there have been no apparent gains in productivity; although more and more workers have computers, few people feel that they have really mastered them; although we know more and more about what they can do, we seem to know less and less of what the consequences will be. Nor does there seem to be any way to determine whether some of those consequences reflect unintended, if perhaps deliberate, attempts to keep sales up in a mature system, the turbulence of growth in the period of maximum innovation, or a state of technical immaturity despite high market penetration.42
[¶58.] It is not sufficient to claim that computers are different because of the special character of their capabilities, or unique because of their importance in fostering social development and/or change. There have been other technical changes of equal importance, ranging from the printing press to television, from the moldboard plow to the harvester combine, that were in their own way just as unique, just as pathbreaking, and just as important for social change, yet still fit the traditional models of diffusion and deployment pretty well. But in each of these cases, the process of adoption and diffusion was governed by use and not by design, by measurable and demonstrable gains rather than future promises.
[¶59.] Logistic analysis of market penetration is not well suited for analyzing rapid growth accompanied by substantive structural change, particularly when adaptation and learning are as demanding and intellectually challenging as they are in personal and office computing. The familiar model rests on two assumptions in particular.
[¶60.] The assumptions that there will be only modest technical change during the rapid growth phase, and that in any case the processes of individual and organizational learning are fast and responsive enough to provide continuous adaptation during the period of diffusion and adoption, are primary, and crucial, for understanding why the case of computing is different. Learning, too, tends to follow a logistic curve. There is very little gain at first for a large investment of time; eventually, enough basic knowledge has been accumulated to allow for rapid gains (the "steep" part of the learning curve). Once mastery is gained, further improvement again requires a fairly large investment of time and energy.
[¶61.] The implicit assumption of the traditional diffusion model is that people, and organizations, are dynamically capable of learning to make use of railroads, or telephones, or data banks, fast enough to accommodate and to understand reasonably well the state of the technical system at any time, even during the period of rapid growth. As a result, demand growth is governed by evaluation of real and well-understood costs and benefits. Growth in demand in turn invites competition and opens new markets, stimulating innovation and change, which in turn triggers another round of learning and adoption.43
[¶62.] None of these assumptions appears to hold very well, if at all, for the recent dynamic of development and deployment of small, powerful digital computers, particularly in smaller offices and businesses. Only in a few specialized markets are new developments in hardware and software responsive primarily to user demand based on mastery and the full use of available technical capacity and capability. In most markets, the rate of change of both hardware and software is dynamically uncoupled from either human or organizational learning logistics and processes, to the point where users not only fail to master their most recent new capabilities, but are likely to not even bother to try, knowing that by the time they are through the steep part of their learning curve, most of what they have learned will be obsolete.44
[¶63.] Logistic analysis, like other traditional economic tools designed to measure growth and expansion of traditional markets, is an equilibrium tool, based, among other things, on the assumption that it is user demand that drives change and growth, and not the community of developers, builders, and programmers. But there is no sign of equilibrium in the current race between hardware capacity and software demands; each generation of new software seems to easily consume the new margin of performance provided by faster and more powerful machines, providing only marginal gains in real performance while emphasizing additional layers of often useless embellishments.
[¶64.] The need to understand and accommodate the new variations and options, the new manuals and auxiliary programs, the new set of incompatibilities and exceptions, and even, in many cases, the lack of backward, interplatform, or interprogram compatibility, keeps users perpetually at the foot of the learning curve and struggling to survive, let alone adapt.45 The net result is growth beyond the logistic curve, saturation without maturity, replacement without obsolescence, and instant obsolescence built in to every purchase. As Landauer puts it:
While the price per megathing has plummeted, the cost of computing has oddly stayed nearly constant. An equivalent machine today costs a fraction of what it did just ten years ago, but you couldn't buy one, and your employees wouldn't stand for it if you did. Instead you get one with ten times the flops and bytes at about the same price. The price of software has not dropped much either, partly because the hardware has become so much more powerful. It is now possible--and irresistible--to write very much bigger programs with many more features.46
[¶66.] Once invested in computerization, individuals and organizations seem to have acquired the Red Queen's dilemma, running as fast as they can just to stay where they are, ensnared by a technical system whose dynamic properties seem to be determined by factors beyond their reach. And, as I will argue further in the following chapters, staying at the low end of the learning curve can in itself be the source of long-term, unanticipated consequences.
The Hegemony of Design
[¶68.] The rapid diffusion of the personal computer that began in the 1980s was a classic case of interactive social change that reconstructed the computer industry as well as the social definition and context of computer use. What did not substantively change, however, was the gulf between the goals and culture of system and software designers and the needs and desires of the general user community. Landauer has expressed it well:
Unfortunately, the overwhelming majority of computer applications are designed and developed solely by computer programmers who know next to nothing about the work that is going to be done with the aid of their program. Programmers rarely have any contact with users; they almost never test the system with real users before release. . . . So where do new software designs come from? The source is almost entirely supply side, or technology push.47
[¶70.] Landauer's book contains numerous examples, ranging from proposals for "heads-up displays" that project instrumentation onto automobile windshields to the case of the mainframe designers who refused to talk to operators for fear of being distracted by them.48 The argument may seem a bit overstated to make a point, but even within the industry, interviewing end users and taking their needs and desires into account is still considered remarkable.49
[¶71.] Why do designers act this way? That none of the authors read or cited here provides a satisfactory explanation is not surprising. Most of those who have pointed out the problems, including Landauer, are not really outside the community of developers and designers at all. They are dissidents within it. For all the extensive research that has taken place on users and user communities, there has been almost no systematic research done on designers and the community of design.50 There is a growing reform movement based on the notion of user-oriented or user-centered design, which does at least accept as a founding principle the need to find out what it is that users want, or need,51 but it is not clear how widespread an effect this has outside the circle of thoughtful critics. And even participatory design has been criticized for being as much a normative design model as the ones it sets out to critique, a way to give users power over designers without asking what the formal context is.52
[¶72.] Designers and users come from different communities; they work in different environments and different organizational contexts; they have different goals and different means. And, for reasons that emerge partially from the history described earlier and partially from the different contexts of design and use,53 the special knowledge possessed by designers gives them a privileged position in the dialogue.54 Users can have input, and even give advice, but generally from a subordinate position. As will be discussed in chapter 7, the question of user authority, or real user control over whether new techniques are introduced (rather than how), is rarely raised as a realistic option. And that, too, has long-term consequences for operability, reliability, and performance.
[¶73.] Another, perhaps more difficult, question is why users continue to put up with this. Arguments based on differences in power or status do not fully explain what happens to individuals, or in smaller organizations. To some extent, the more general willingness to accept the rash of technical tweaking and exciting but nonproductive applications and embellishments as a sign of progress is a social phenomenon, not that distant in its sociology and politics from the long-standing American relationship with that other icon, the automobile. But that still does not fully explain why there are so few pockets of resistance where old machines running old software that does the job perfectly adequately are maintained and used, or why corporate and business users appear to be even more vulnerable to the argument for new hardware and software than individuals.
[¶74.] If there is an explanation at all, it must lie with collective social behavior. Plausible causal factors range from the simple desire to stay at the cutting edge of technology (without being able to have much input on defining it) to the fear of losing ground to competitors who might gain even a small marginal advantage. In some markets, and for some industries, that might make sense. In many, perhaps most, it does not. Instead, what seems to provide the impetus at the moment are the demands of new systems for computer interconnection, for global as well as local networking. The mainframe provided a central store of information and a central means of communication that was lost in the fragmentation of one computer for every user. For many firms, the first result of the desktop revolution was the replacement of an integrated and manageable computer system with an idiosyncratic collection of individual pieces of relatively isolated machinery. As will be discussed in the following chapter, it was the subsequent development of networks that provided both an impetus and a means to recapture users and rerationalize the technical variance of offices and businesses.
NOTES:
1 Winner, Autonomous Technology.
2 A notable exception is the recent dissertation of Kären Wieckert, whose empirical and theoretical study of designers contrasts the "context of design" with the "context of use." Wieckert, "Design under Uncertainty."
3 I explore these ideas more fully in Rochlin, "Pris dans la toile" (Trapped in the web).
4 See, for example, Edwards, Closed World; Flamm, Targeting the Computer.
5 The fascinating story of the evolving relationship between the U.S. government, IBM, and MIT that grew out of the SAGE (Semi-Automatic Ground Environment) project is told in some detail in Edwards, Closed World, 142ff. Prior to its choice as prime contractor for SAGE, IBM had no experience at all in the computer field. By the time the SAGE contracts were played through, it was dominant.
6 One only has to attend a large computer conference to witness the size and makeup of the crowds who attend talks by such research leaders as Negroponte to verify this observation.
7 See, for example, the graphic description in Shurkin, Engines of the Mind, especially p. 171.
8 Davidow and Malone, The Virtual Corporation, 36ff.
9 Shurkin, Engines of the Mind, 301ff. Vacuum-tube computers were of course not only large in size but generated enormous amounts of heat (as a graduate student, I helped build one of the first transistorized computers in two relay racks at one end of a gigantic room that had more air conditioning capacity than the rest of the research building). More troublesome was the nongaussian distribution of failures as a function of lifetime, which guaranteed that failure of a new tube was in fact more probable than that of one that had been in long use. See, for example, the excellent discussion in Edwards, Closed World, 109-110.
10 Shurkin, Engines of the Mind, 261.
11 Roberts, "The ARPANET."
12 Shurkin (loc. cit.) also has an excellent and entertaining history of the punched card, and of Herman Hollerith's punched-card census machines. Hollerith's company eventually merged with others to form the Computing-Tabulating-Recording (CTR) company. In 1914, CTR hired a young salesman named Thomas Watson, who soon took control. In 1924, CTR was renamed International Business Machines. Forty years later, the computer giant being led by Watson's son, Thomas Jr., was still promoting the use of Hollerith punched-card equipment. Hollerith's technology, patented in 1887, dominated the market for almost a century.
13 George Orwell's classic vision of the computer as an interactive observer in the service of a totalitarian state has reappeared in countless stories, novels, and films (1984: A Novel); indeed, repetition has rendered it almost rhetorical. An alternative form, the central computer that asserts control on its own, has been a popular theme in science fiction for some time, e.g., Jones, Colossus, which was later made into the movie The Forbin Project, or HAL, in 2001: A Space Odyssey. The most creatively parodic vision of a computer-centralized dysfunctional society is probably the recent movie Brazil (Terry Gilliam, Brazil).
14 As the late David Rose of MIT once observed, dinosaurs had independent motor-control brains in their butts even larger than those in their head. When you kill a dinosaur, it takes a long time for the body to figure out it's dead.
15 Depending on the method of analysis, the real (constant-dollar) cost of a given amount of computing power has been falling at the rate of between 30 percent and 35 percent annually for the past thirty years (roughly a factor of ten every six years or so). A single transistor has fallen in price by a factor of 10,000 during that time; in contrast, the cost of a memory chip has remained almost constant since 1968--but its capacity has gone up by a factor of 4,000. The most remarkable progress, however, is in an area that cannot be measured on monetary scales and has no analogue in any other industry. A single advanced large-scale integrated circuit chip selling for a few hundred dollars may have millions of microtransistors on it, and be capable of performance equal in capacity and exceeding in speed that of a mainframe computer of little more than a decade ago. It has become almost rhetorical by now to note that if the automobile industry had made the same progress as the semiconductor industry since the mid-1960s, a fully functional Mercedes Benz would cost less than five dollars; it would also be smaller than a pinhead.
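As a rough check of the compounding (illustrative arithmetic, not a figure from the note): an annual decline of 30 to 35 percent gives

    (1 - 0.30)^6 ≈ 0.118    and    (1 - 0.35)^6 ≈ 0.075,

so over six years the cost falls to between roughly one-eighth and one-thirteenth of its starting value, consistent with the "factor of ten" shorthand.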
16 Interesting narratives of the development of the minicomputer can be found in Kidder, Soul of a New Machine, and Olson, Digital Equipment Corporation. Pearson, Digital at Work, is a beautifully illustrated history of DEC that complements the exhaustive historical inquiry of Rifkin and Harrar, Ultimate Entrepreneur.
17 Other companies such as Data General and SDS quickly followed DEC into the market (Kidder, Soul of a New Machine), but they never achieved the same level of success.
18 My own memories of being a sometimes reluctant player in the rapid development of the now famous Berkeley Standard Distribution (BSD) version of UNIX remain quite vivid. From time to time there would issue by message from the computer center an announcement of a new release of the editor, or the formatter, or even the terminal definition program, that drove us not only to despair but to the center to pick up the new documentation. More than one user found that a year's leave from Berkeley required extensive relearning before it was possible to come up to speed again.
19 Of all the narratives of the early history of the PC, none is more amusing, or more idiosyncratic, than that of the pseudonymous Robert X. Cringely (Accidental Empires).
20 This is the essence of the story narrated by Freiberger and Swaine in their superb history, Fire in the Valley.
21 Ibid., 31ff.
22 Ibid., 212.
23 I thank Kären Wieckert for this observation, and for her help in guiding me through the maze of the early days of computer and software development.
24 The definitive history of Xerox PARC and the failure to market the Alto is that of Smith and Alexander, Fumbling the Future.
25 In the apocryphal story, very popular at PARC, a nameless programmer, failing to realize the editor is in command mode, types out the word "edit." The editor promptly marks everything in the text (e), deletes it all (d), goes into insert mode (i), and types the single letter "t"--which is now all that remains of the day's work.
26 Smith and Alexander, Fumbling the Future, 93ff. The display was 808 by 606 pixels, 8.5 by 11 inches, a form of electronic paper. The first image ever put up on a bitmapped screen was the Sesame Street Cookie Monster.
27 It was also, in computing power terms, very, very expensive. To this day, many university computer centers running UNIX have only limited graphics capabilities to reduce the load on their multiuser, time-shared machines.
28 Apple devotees tend to be disproportionately concentrated in the fields of education and research. To some extent, this is a reinforcing feedback loop, since as a result quite a bit of software specialized to those fields, particularly that making use of elaborate graphics, was developed for Apple machines. But it is also interesting to note that these are the people who have traditionally accepted their dependence, e.g., on near-monopoly mainframe or minicomputer companies, as the cost of getting what they want, and are not intimidated by a machine whose inner workings are not open or visible to them.
29 DOS stands for disk operating system, but PC-DOS (the version created for IBM) and MS-DOS (the generic form) also include a basic input-output system whose open standardization is even more important for the development of software.
30 Some have argued that IBM never did think this line of reasoning through, but just assumed that they would come to dominate the market because, after all, they were IBM. It has also been pointed out that many of the top executives at IBM never really believed that the personal computer would amount to much, and may therefore not have been paying that much attention to the details.
31 I omit here the entire history and evolution of specialized workstations for graphics and other design applications, such as those of Sun, Hewlett-Packard, and Silicon Graphics. Not only is their market fairly specialized, the machines, systems, and demands on user competence and training are closer in design and specification to minicomputers than micros, even if the cost differential has narrowed over time. I also omit discussion of IBM's OS/2, a PC operating system with many strengths that was poorly supported, and is fading into comparative insignificance.
32 It is also ironic to note that as Windows evolves from an interface into an operating system, it is becoming as complex and almost as obscure as Apple's latest System 7.5.
33 Indeed, Microsoft's attempts to dominate all aspects of software, from operating system to applications of every type and, perhaps, even telephone access to the Internet, have not only drawn the attention of government regulators, but caused some concern among users about their growing dependence upon a single company.
34 More powerful desktop or graphics workstations such as those from Sun and Xerox are basically miniaturized minicomputers, and tend to run their own OS or some variant of UNIX. The two dominant systems for desktops at the moment are Apple's System 7.5 for the Macintosh and Microsoft's MS-DOS and Windows. Other machines such as the Apple II and the Amiga have relatively small, specialized market shares, as do other PC operating systems such as Digital Research's DR-DOS, or OS/2.
35 For those who follow this stuff, 1995 was the year of irony. Apple moved to open up its system, not only allowing but encouraging clone makers, introducing a more compatible chip, and adopting a PC interface bus standard (PCI) for some of its new machines. On the PC side, Microsoft's desire to completely dominate the software market was being matched by Intel's drive to gain control over the hardware through a combination of aggressive marketing and buying up memory chip production.
36 See, for example, Rifkin and Harrar, Ultimate Entrepreneur, 213ff. Dan Bricklin designed VisiCalc specifically for the Apple II; Mitch Kapor created Lotus 1-2-3 specifically for the IBM PC.
37 As of July 1995, IBM belonged to about 1,000 standards organizations. Cavender, "Making the Web Work for All."
38 Hughes, "Evolution of Large Technical Systems"; Hughes, Networks of Power. Although similar to the arguments of Hughes and others in its account of the social nature and context of technologies, the arguments advanced here take a perspective that is oriented more around intraorganizational factors than toward the external interplay of organizations with knowledge, markets, and regulatory practice. Both of these approaches differ epistemologically and methodologically from social constructionist schools in their central focus on organizations and institutions rather than individuals as central elements of interactions. See, for example, Bijker, Hughes, and Pinch, eds., Social Construction of Technological Systems.
39 Hughes, Networks of Power, for example, has pointed out the importance for technical development of "reverse salients," in which the further expansion or improvement of the system that is desired or sought is held up by the requirement for new techniques or methods.
40 David, "The Dynamo and the Computer." David argues that one would expect productivity gains to increase markedly at about the 70 percent adoption point. What is disputed about computers in business is just what the actual adoption point is. Although office computing only represents 2-3 percent of net capital investment (compared with perhaps 20-30 percent for the electricity case he studied), it is not clear that this is an appropriate measure.
41 Landauer, Trouble with Computers, 103-104.
42 For a wonderful exposition of the difference between deliberate consequences and intended ones, see Osborn and Jackson, "Leaders, Riverboat Gamblers."
43 Even for those cases that have been criticized as examples of "autonomous" technology, out of human control, a careful look shows that it is the users and not the developers and promoters who are driving the system. See, for example, Winner, Autonomous Technology.
44 Landauer, Trouble with Computers, especially at 118ff. For another perspective, see also Wurman, Information Anxiety.
45 Landauer, Trouble with Computers, 115ff.
46 Ibid., 338-339.
47 Ibid., 169.
48 Ibid. The first example is on p. 170, the second on p. 318.
49 Wieckert, "Design Under Uncertainty." In a recent review of a new portable disk drive for personal computers in Byte, Stan Miastkowski commented: "Rather than use the old engineer-driven `build neat stuff and they will come' design philosophy, Iomega queried end user focus groups, asking potential customers what they wanted most in a removable-media drive." This was noted as being desirable, but not at all common ("Portable-Data Stars," Byte, August 1995: 129-131).
50 According to Kären Wieckert, whose dissertation focuses on this problem: "Surprisingly, there has been little careful study of the behavior of actual designers confronting authentic design dilemmas generated by concerns from the context of use, creating representations of those concerns, or resolving those concerns through the artifacts they are designing" (Wieckert, "Design Under Uncertainty," 12). The notable exception is her dissertation, whose empirical studies of the design process in three organizations are complemented by a subtle theoretical argument that separates and compares the "context of design" and the "context of use." That she has also been a professional designer of expert systems makes the study all the more unique, and more valuable.
51 See, for examples, Winograd and Flores, Understanding Computers; Norman, Things That Make Us Smart. This approach has been used extensively in Scandinavia. See, for example, Ehn, Work-Oriented-Design.
52 Wieckert, "Design Under Uncertainty," 107ff.
53 Ibid., 108.
54 Suchman, "Working Relations."