[¶2.] The installation and networking by research libraries of computerized catalogs and their associated powerful search and retrieval tools have been promoted by librarians, and by library schools, as a user-empowering technology. Instead of having to go to the library and physically search the drawers of card files, or wander the stacks and shelves, anyone needing to find an item can work in their office, or at their home, at their own convenience and on their own schedule. The ability to search by keyword, by title, or by subject has no parallel in the linear world of 3 x 5 cards impaled on brass rods in neat arrays in their wooden drawers. Moreover, the search can be extended to link other libraries that may have what you want even if your own library does not.
[¶3.] So far, so good. The computerized library search system at my own university was eagerly welcomed by faculty and students alike, even by those working inside the library (on some days there are lines waiting for a turn at the terminals). And even Berkeley, with one of the largest university libraries in the world, does not and cannot own everything in print. Networked library software opened the world for searches. The benefits were clear. There were, however, some unexpected costs.
[¶4.] Last year, my wife went to a group meeting about a new "information technology" soon to be available through the campus computer network. The claim was that these new search methods would be vital tools for scholars seeking access to journals and other information not available through the campus library computer system and its links to other networks. The demonstration turned into a scholarly mini-rebellion. Visual presentations were rated by most of the group as scholar-hostile. Keyword searches were trying.1 There were no help screens. "Hits" were listed in a frequency table using algorithms that were opaque and could not be modified. There was no way to scroll back through the screens of data other than restarting a search.
[¶5.] Gopher and other Internet-access tools were considerably better, but lacked any convenient means for doing layered keyword searches. Even the new visually oriented interfaces such as Mosaic and Netscape were not rated any more highly by the audience, who found the organization of the material opaque and the structure totally mysterious. The interface rules, complete with check boxes, mouse-selected icons and mouse menus, buttons, and hypertext links were considered to be more of a distraction and an annoyance than an aid to use.2 And for those in the audience still working in DOS, the nongraphical interface was incredibly clumsy.
[¶6.] As might be expected from the argument of the preceding chapter, the response of those giving the demonstration was to place the burden on the prospective users. Because the network was nationwide, its protocols and software were fixed. Adapt to the system and its rules. Learn the interface. Get a bigger, faster computer, or a Macintosh (they are cheap). Work in the office over the Ethernet, or get a high-speed modem or even faster link at home. In short, there was no choice but to comply.
[¶7.] As pointed out by Nicholson Baker,3 there was more at stake here than efficiency or modernization. The enthusiasm with which librarians are discarding their catalogs once their computerized systems are up and running can only be explained as part of a deliberate reconstruction by librarians of the representation of a library. Across the country, library schools are becoming schools of "information science." As Baker notes, the hated cards continue to carry the image of Marian the librarian--female, mousy, graying, old-fashioned, and probably single. The technomodern image conveyed by the librarian as programmer and expert computer consultant is quite different.4
[¶8.] At one library after another, the rooms that once housed the card catalogs are being replaced with study tables or computer terminals. The cards are being burned, shredded, turned into cereal boxes or electrical insulation, and even used as decorations and wall coverings. The computer record was to be the master record. It took a while for the implications of that to sink in. It took even longer for us to realize that the ultimate fate of the card catalog itself was destruction. Once the task of transferring the information to the computerized data base was complete, the catalog was to be destroyed.5 But the information that the cards once contained is not necessarily all in the computer. Retrospective conversion of catalogs is expensive, and many of the country's finest research libraries, including mine, have not been able to pay to have all of the information on each card put into the database.6
[¶9.] What was also lost with the card catalog was respect for a certain way of looking for information that was, for many of us, a well-developed and efficient research skill. In experienced hands, the card catalog was a flexible instrument of incredible versatility. Computerized data searches, on the other hand, are inherently structured. The first search might be refused (too many entries), or result in a dump of thousands of listings; an attempt to narrow it can be an exercise in frustration that would try the patience of Job.7
[¶10.] It is said that we increasingly live in an information society. With powerful desktop computers, CD-ROMs, and global networking, each of us now has potential access to much more material than any single human being can be expected to read, let alone to master in a lifetime--even within one's own discipline, field, or special area of interest. With this has come a new form of anomie and alienation, "information anxiety," that is already spawning a cottage industry in techniques for restoring confidence that one's own limited range of knowledge and skill still counts for something.8
[¶11.] It is the network and not the computer that created this trend. Networking has increased the availability and accessibility of data by orders of magnitude: by one recent estimate, the Internet already makes available to even the casual user several million megabytes of online, public data--a digital "encyclopedia" roughly a thousand times the length of the
[¶12.] But this aspect of networking, however challenging, still lies within the domain where computers are only substituting for what humans cannot do very well--data storage and processing. The dimension that mattered, and surfaced indirectly during the meeting, was the degree to which one of the major social costs of networking was loss of user autonomy and control over the means and processes of access. No wonder the librarians felt empowered. Instead of providing the traditional service of ensuring access to the community on demand, they were now managing a vital and necessary resource. Whether they liked to admit it or not, the information specialists were defining, shaping, and, in some ways, controlling user behavior through standardization of process and procedure.
[¶13.] Circling out from this example are others that have become nearly invisible to us through their familiarity: the airline reservation systems used by almost every travel agent; box office and ticket reservations; and, less visible to the public but increasingly central to business and commerce, the public and private nets that foster financial trading and markets, inventory control and purchasing, and just-in-time and other management strategies designed to increase efficiency by eliminating the need to carry or manage inventories. Networks, it seems, are everywhere. The computer or terminal on your desk, at home or at the office, is not just a means for giving you access to the world of networks, but also a means for the networks to impose on you the standards for hardware, software, and behavior required to make them accessible and (presumably) useful.
[¶14.] In the days of the mainframe, computers were seen as carrying forward the original purpose of information technology as a means to maintain and improve the centralization of task and functional hierarchy that had created so many of the great industries of the century.10 But reducing fragmentation while keeping the benefits of individual workstations required the development of effective, reliable, and standardized local networks, tying independent users into an interconnected web that could be resubjected to central control. That parallel story has three main threads that converged in the 1980s into the web of connectivity that most computer users now take for granted.
[¶17.] The Defense Advanced Research Projects Agency (DARPA), formed in 1958 (as ARPA) to develop the basic research needed to promote the development of advanced technologies for defense applications, was the fountainhead of many of the subsequent developments in computers and computing.11 Over the years, DARPA gradually became the lead agency for federal funding for most mathematics and computer-related research. Among the projects it supported during the 1960s were the first experiments in timesharing: Project MAC at MIT and a joint Bell Laboratories-GE multiplexed computer system (Multics).12 Although Multics ultimately failed, it strongly influenced the design of future operating systems, particularly with regard to incorporating timesharing as a fundamental attribute.13
[¶18.] DARPA was also interested in the development of computer networks for reasons that have profoundly shaped their subsequent evolution. The first was to allow the more efficient interconnecting of what were then relatively independent and mission-dedicated, defense-oriented computing centers.14 Until that time, the only way to connect was analogous to a telephone call; the computers established a dedicated link and transmitted back and forth along it. This was relatively inefficient because data transfer rates were slow compared to the speed of the machines. It was even less efficient for a timeshared machine because of the lack of coordination between the availability of the machine and that of the link.
[¶19.] The second purpose of DARPA was to use the net as an early experiment in "hardening" the U.S. communications system against the possible ravages of nuclear war by building a decentralized network that could continue to function even if parts of it were damaged.15 The underlying assumption therefore was that any single link in the network was unreliable; the communications links would have to be redundant and passive. As a result, the burden of managing the transmission of information was to be placed on the senders and receivers.
[¶20.] Proposed solutions for meeting the technical implementation requirements while meeting these objectives quickly converged on the notion of transmitting packets of information that contained within them a code indicating how the packet had been assembled, by whom, and for whom it was destined. To allow for redundancy and multiple channels, and to prevent tying up any one line for the long time needed to send a sizeable message at telephone speeds, some method would be needed to interleave them. The best way was to break up each message into small packets, each marked with an ID code that also indicated what part of which message it belonged to. What was then required was a method for implementing the idea technically, a means for switching between packets. Indeed, such a method, if successful, could also be applied to increase the carrying capacity of existing long-distance and transcontinental telephone service, which at that time was still largely restricted to transmission over cables of limited capacity and bandwidth.
[¶21.] With the idea firmly in hand, DARPA funding resulted in the first packet-switching experiments at MIT, and the subsequent development of the technology for packet-switched computer communications.16 It also funded the basic research into packet "collision control" that made possible the development of effective packet-switched communication. The experiments were so successful that DARPA moved almost immediately to standardize both hardware and protocols and put a system into place.17 By 1973, a nationwide system, ARPANET, was fully operational. The next step was the development of a set of "protocols" or rules for network transmission. The resulting combination of IP, the Internet Protocol, which standardizes the way in which the address of the potential recipient is coded, and TCP, the Transmission Control Protocol, which handles both the division of messages into packets and their reassembly into a coherent, sequenced message at the far end, quickly became a de facto standard (TCP/IP) still in use today.18
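The division of labor sketched above--addressing handled by IP-style headers, segmentation and ordered reassembly handled by TCP-style sequence numbers--can be illustrated with a small, simplified sketch. The field names, packet size, and dictionary representation here are invented for illustration; real TCP/IP headers carry far more state.

```python
# Simplified sketch of packet-switched messaging: each packet carries
# enough header information (sender, receiver, sequence number) for the
# receiving end to reassemble the message even if the network delivers
# the packets out of order over different links.
import random

def packetize(sender, receiver, message, size=8):
    """Break a message into small, individually addressed packets."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"from": sender, "to": receiver, "seq": n,
             "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the message."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    assert len(ordered) == ordered[0]["total"], "packet lost in transit"
    return "".join(p["data"] for p in ordered)

packets = packetize("hostA", "hostB", "Networks, it seems, are everywhere.")
random.shuffle(packets)   # the network may deliver packets in any order
print(reassemble(packets))  # the original message, restored at the far end
```

Because each packet is self-describing, many interleaved conversations can share the same lines, which is exactly what made the scheme attractive for slow long-distance links.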
[¶22.] In addition to achieving the primary goals of developing a robust and decentralized network whose performance was nearly independent of the source machine, DARPA's network initiative had several unanticipated spinoffs that would have a profound influence on the future of computer networking. Many of the Bell Laboratory staff who were to design UNIX had worked on DARPA projects. Robert Metcalfe, the principal designer of Ethernet, came from Project MAC, carrying with him the same goals of a universal, flexible network. And because most computer research institutions at that time were DARPA funded, and therefore entitled to use the ARPANET, it provided them with continental electronic mail, file transfers, and other services via a nearly invisible and instantaneous web of interconnection that became the standard to which other research communities aspired.19
[¶24.] The Bell Telephone Laboratory was famous for its opposition to centralized and hierarchical organization. Researchers were used to moving around freely, exchanging and sharing information and their latest results. When DEC made available its first minicomputers, the staff eagerly adopted them, and set about transferring to their new UNIX system some of the ideas about connectivity that had been developed in the Multics project. As a result, UNIX was designed from the outset to timeshare, while presenting to the network of independent users the appearance that each was operating his or her own machine.20
[¶25.] But that was still not enough to satisfy the Laboratory community. Suppose one had a memo, or data, or a program, that one wanted to communicate to or share with another. The solution was to allow the transfer of files between users--what looked like sent "mail" was in reality a direct internal copy from one account (subdirectory) to another.21 As the laboratory acquired more independent machines, each with its own system, it also seemed natural to interconnect them without at the same time subjecting them to centralized control. The importance of that small step cannot be overstated. It changed the relationship of users to computers completely. Instead of seeing themselves as part of a hierarchical tree under central control, the representation to the users was that of a distributed horizontal network.
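The mechanism just described--"mail" that is really a file copied from one user's directory into another's--can be sketched in a few lines. This is a loose illustration, not the actual UNIX implementation; the directory layout, function name, and user names are invented.

```python
# Hypothetical sketch: on a timeshared machine, "sending mail" can be
# nothing more than copying a file into a directory belonging to the
# recipient, who finds it there at next login.
import shutil
import tempfile
from pathlib import Path

def send_mail(root, sender, recipient, filename):
    """'Send' a file by copying it into the recipient's mailbox directory."""
    src = Path(root) / sender / filename
    inbox = Path(root) / recipient / "mailbox"
    inbox.mkdir(parents=True, exist_ok=True)   # the recipient's drop box
    shutil.copy(src, inbox / f"{sender}-{filename}")

# Demonstration on a throwaway directory tree standing in for user accounts
root = tempfile.mkdtemp()
(Path(root) / "ken").mkdir()
(Path(root) / "ken" / "memo.txt").write_text("Multics ideas worth keeping")
send_mail(root, "ken", "dmr", "memo.txt")
print((Path(root) / "dmr" / "mailbox" / "ken-memo.txt").read_text())
```

The point of the sketch is how little machinery is needed once users share a machine: no transmission, no protocol, just a copy from one subdirectory to another.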
[¶26.] When the UNIX-DEC combination was made available to colleges, universities, and research laboratories around the country, the new groups quickly moved from being passive recipients to active co-developers. In particular, the Computer Systems Research Group at the University of California at Berkeley had a set of expectations about connectivity that were not yet matched by the capability of existing systems. With contributions from other centers, the Berkeley group developed a complete package of built-in networking and communications capabilities that greatly fostered network communications between as well as within the various centers, and between them and other federally funded laboratories.
[¶27.] Networking was the outcome the Berkeley group had been supported by DEC and government grants to pursue. The unanticipated, and, in the long term, perhaps the most socially important consequence came from the seemingly trivial and relatively simple extension of the network to make the computers easily accessible via telephone from anywhere on or off the campus. Within a few years, even those parts of the university community that were furthest removed from concern or interest about the future of computers or computing had become computer literate and connected.22 A whole generation of budding academics and political and business leaders in training was raised not only on the experience of easy, direct access to powerful computing abilities, and to each other, but also on the idea that the potential uses of the new methods of connectivity had yet to be fully explored.
[¶29.] Researchers at Xerox PARC took an entirely different approach. Although at least as interested in intercomputer connectivity as their predecessors at the Bell Labs had been, their dedication to stand-alone workstations meant there was no central computer to carry the communications processing load. Nor could they follow the UNIX approach, since the users really did each have a completely dedicated and independent machine. How then to build a local-area network (LAN)? In existing LANs, the management of communications traffic was handled by the central processor essentially along the lines of traffic flow control. Without a central node or machine, the flow of signals would become an unidentifiable jumble.
[¶30.] The elegant solution the PARC researchers ultimately developed in the mid-1970s was distribution of the traffic control functions across the network. Through this method, individual machines are all connected to a single, high-capacity loop, the Ethernet, which carries many simultaneous messages around and around past all the terminals as interleaved packets of information, each with a code to identify the sender and the intended recipient. Common circuitry and software installed at each workstation watch the packets coming by, picking off those destined for that machine (and removing them from the loop). When a machine needs to transmit, it waits for gaps in the sequence to send its output; if two machines try to "talk" at the same time, and scramble the result, each waits a small, random interval and tries again.
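The contention scheme described here--transmit, detect a collision, wait a small random interval, try again--is the heart of the design, and its effect can be mimicked in a toy simulation. This is illustrative only: real Ethernet uses binary exponential backoff over microsecond slot times, and the station names and delay range below are arbitrary.

```python
# Toy simulation of Ethernet-style collision handling: stations that try
# to transmit in the same time slot "collide," and each retries after a
# small random delay until it gets the wire to itself.
import random

def transmit_all(stations, max_slots=1000):
    """Return the slot in which each station finally transmitted alone."""
    ready = {s: 0 for s in stations}   # slot at which each station will try
    done = {}
    for slot in range(max_slots):
        trying = [s for s, t in ready.items() if s not in done and t <= slot]
        if len(trying) == 1:           # the wire is clear: success
            done[trying[0]] = slot
        elif len(trying) > 1:          # collision: everyone backs off
            for s in trying:
                ready[s] = slot + 1 + random.randint(0, 3)
        if len(done) == len(stations):
            break
    return done

random.seed(1)
print(transmit_all(["alpha", "beta", "gamma"]))
```

Because the random delays quickly spread the retries apart, the stations sort themselves out with no central scheduler--which is precisely the property that let the network dispense with a central node.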
[¶31.] Ethernet was more than a technical triumph; it was a complete break with the historical tradition of computer "centers." The assemblage of Ethernet-interconnected, independent, graphics-oriented workstations at Xerox had no center, and no central coordination other than the standardization of software and protocol. Even the system administrator had to log on as an individual user, with considerable powers of access but no authority over the individual stations. Ultimately, the system developed to the point where users could actually gain direct access to programs and files on each others' computers. Whether by design or by accident, the resulting network of equivalent users had permanently displaced the mainframe-derived hierarchy as a model for organizing interactive computing.23
[¶32.] In 1974, 90 percent of computer sales were mainframes. A decade later, just before the explosive growth of home computing began, minicomputers already had a 20 percent share, and microcomputers had 35 percent. At least part of that growth was the result of the new capacity for networking. "The 1980s marked the passage from an era of large, centralized computer installations to a world in which vast numbers of more inexpensive machines, distributed widely among users, are tied to one another and to a shrinking number of specialized, centralized computing resources."24 And as the networks grew from local to regional to national, and, eventually, to international, so did the possibilities--and the potential implications.
[¶34.] Nowhere are the alternatives of a networked future more thoroughly or imaginatively explored than in the pages of the journal Wired, an eclectic blend of technical detail, New Age spiritualism, and futuristic hype that has become the semiofficial voice of the libertarian wing of the computer culture. The common vision portrays a fully interconnected society where the networking of individuals has undermined the power of existing firms, institutions, and governments and destroyed or bypassed their instruments of hierarchy and central control, replacing them with more open and democratic global social communities that are self-organizing, self-regulating, and self-empowered.25 And the means for this freedom and empowerment is to be the Internet.
[¶35.] This point of view has been subject to a number of pointed critiques, most recently and notably those of Stoll and of Turkle,26 and I will not repeat them here. Nor do I directly address such issues as advertising and commercial use, censorship, and encryption, which are now being so noisily contested. Rather, I confine myself to the question of whether and how the Internet has contributed to or affected the processes of standardization and control described earlier.
[¶36.] Almost any user in the United States with a telephone line and a modem (and many other users at sites in Canada, Europe, and even the Far East) can now log on to a worldwide communications network almost as easily as making a telephone call, using a standardized protocol and a method of addressing that is as unique and unambiguous as a telephone number.27 Through the nearly global system of interconnection provided by the Internet, a number of remarkable services are provided, ranging from simple, international electronic mail to the ability to log on directly to remote computers and search or transfer files, to actual sharing of data and databases.
[¶37.] Even within the user community, the Internet has often been described as organized anarchy. Self-organized would perhaps be a better term. The Internet (which is distinguished from various other forms of internets by its capital "I") is an institution without a real organization, a network of networks, a formal entity with characteristics that are at any moment described almost entirely by those of the units that band together to form it. Its existence is predicated entirely on the desire of its participants to perpetuate it; its standardization and coordination are driven entirely by the requirements to adhere to a common set of rules and protocols.28
[¶38.] The Internet came into being almost by accident.29 In 1983, ARPANET was split into two networks: ARPANET was retained as the name for the research network, while the military part, with controlled access, became MILNET. When NSFNET was created in the late 1980s, the intention was to retire ARPANET entirely. Instead, ARPANET was replaced with a defense research internet, which fully took over its functions in 1989, when ARPANET was officially disbanded.30 Meanwhile, the set of interconnected networks using the TCP/IP standard, and the NSFNET design of chained communications links, had grown to the point where ARPANET was no longer needed as the backbone of the system, a function that has since been taken over by commercial networks.
[¶39.] The Internet has grown exponentially. When it started there were about fifty connected networks. In 1981, there were about two thousand individual users and 213 host computers. By 1989, there were more than five hundred networks connected, and some tens of thousands of computers. By 1993, more than eleven thousand separate networks were being linked, interacting through more than 1.5 million host computers, and still growing.31 The number of individuals worldwide who now have access is difficult even to estimate, but it is certainly in the tens of millions.
[¶40.] Given the increasing power of computers, it was not long before even users on LANs or other closed systems using strictly local software and protocols were able to spare the extra computing power to implement protocol translation.32 The resulting additional layer of interface via specified "gateway" computers continues to provide increased access to the Internet every year through an incredible variety of means.33 Users without direct access to their own gateways can subscribe through electronic mail or bulletin boards at costs in the tens of dollars per month; small companies can pay larger fees to link distant offices or services; larger companies can even find very high speed services whose monthly costs may amount to many thousands of dollars.34
[¶41.] The ability of users to educate, to communicate, to share, and to find extended networks of other people with the same interests and opinions was and remains unprecedented, even if the quality of that interaction is debatable. The emergence of the World Wide Web, a set of hypertext Internet sites with standardized protocols and a common hypertext markup language, imposed a de facto standardization. With that development, and the emergence of standardized interfaces such as Mosaic and Netscape, the Internet moved toward becoming a commercial service as well as a personal one, and a formal source of information instead of an informal one. Many libraries, newspapers, and even companies now provide access via Web pages. For a few, it is already the primary means of supplying information to customers (other than by very slow physical mail--commonly known to net users as snail mail).
[¶42.] The protocols required for global connectivity, particularly those with graphic interfaces and multimedia mixes of text, sound, and pictures, place even greater demands on hardware and software than did LANs and limited networks, and require more standardization and compliance. As the global network becomes more functional for business, and more commonly used as a database and source of commercial as well as informal information, it will further extend the range and scope of forces compelling convergence of equipment and software, and, given present experience, it will add another layer of pressure to the drive for faster, more powerful machines.35
[¶44.] The original networking of users to single, large centralized computers required extensive hardwiring, strictly limiting flexibility and the ability to work at home, or even at remote sites. The equally demanding requirements for maintenance and standardization resulted in centralized control by system operators. The introduction of the minicomputer did not much relax this condition. Although the minis were themselves interconnected, the individual users were really not. What appeared to be peer-to-peer communication was actually user-to-mini-to-mini-to-user, and still subject to considerable central control by the system operator(s).
[¶45.] This was fine in some circumstances, and still is. But in the 1980s, most of the growth in business as well as personal computing was in the direction of stand-alone machines, open systems, and user control over software. This did not sit well with managers in many organizational settings.36 In addition to the isolation of workers, "sneaker-net," the carrying of floppy disks back and forth between machines, was a system of limited efficiency and questionable reliability. Although connectivity through the mainframe had been clumsy and inefficient, the central system had the advantage of enabling electronic communication and data flows between and among users. A generation of managers had been brought up on the idea that the information revolution was a means for restoring managerial control.37 The fragmentation of machines and software left no mechanism for managers to improve the coordination and integration of their work.
[¶46.] The first wave of office interconnection was almost as anarchic as the first wave of computerization. Companies, and even individual units within them, were free to assemble a network in any way they liked. Over time, many competitive systems emerged. Some were basically intercommunication packages restricted to specific types of machines; others were actually operating systems of their own, capable of running over any of a variety of interconnection hardware. In some businesses and offices, there were several interpenetrating, sometimes overlapping systems that were only marginally compatible with each other. Even an organization with nothing but standardized PCs might end up with a heterogeneous array of LANs that were only partially compatible and rarely interoperable.38 Connecting from one organization, business, or unit to another became a major headache. The only real standard that emerged was the choice of Ethernet as a de facto wiring standard.39
[¶47.] The proliferation of desktop computers and local networks threatened to decentralize office tasks and office structure as well. As the work done over networks became more central to the organization's function and purpose, there were increasing demands to integrate the LANs and bring their administration under central authority. Moreover, the increasing capabilities of both workstations and network hardware made it possible to interlink the LANs within any organization, assembling them into a single, larger network. Over time, the two historical approaches converged. As central computers grew more powerful, the effects of system load diminished. As LANs grew more powerful, and more complicated, they increasingly required the presence of a full-time management information system (MIS) staff to keep them running efficiently. As office integration continued, many of the LANs were assembled into larger nets managed by a central mainframe or minicomputer.
[¶48.] Over time, the network began to become the central functional element in the office or business, operating central software and communicating various pieces of critical information to and from databases, data management systems, and other programs of major importance. Given the original history of information technology as a method for integration and control, it is therefore not surprising that the choice of information technologies, including communications and design as well as individual workstations and their software, became tightly meshed with the design of the networks, and the whole package became a design choice and not just an enabler for organizational performance and change.40
[¶49.] Almost independent of size, scope, and purpose, one organization after another moved to centralize information systems purchases and establish organizational or corporate standards, and to replace ad hoc and disorganized management of idiosyncratic LANs with network management under the control of management, usually through a dedicated MIS staff not that different in their power and approach from that of the traditional centralized computer center. The paradox was nicely put in an article in Infoworld: "In order to construct the flexible, interconnected systems business demands, a fair amount of centralized control must be exercised."41 Freedom and empowerment, it seems, are available only at the price of increased technical and user conformity with specifications, protocols, and software established by those who create and implement the net.
[¶50.] Most recently, large firms have been adopting a new language, "intra-net," to describe a networking system that is either entirely internal, or connected to the global Internet only through specified gateways, and with traffic that is not only controlled, but often carefully monitored. In addition, many firms that do continue to provide Internet access are now being offered special software that monitors the net access time and usage of every user, and may be expanded to scan messages as well.42 Were workers ever under more control than this?
[¶52.] In its early days, the automobile was also considered by many as having the potential to foster a "revolution" in American society.43 Like the computer, the automobile continues to nurture the mythic dimensions of autonomy and personal freedom. But that freedom lives mostly in the world of advertising, where vehicles are pictured on mountaintops instead of in traffic jams. What is not advertised are the costs and demands of the network of roads, highways, and the other elements needed to make the automobile useful. It is through the network and not the vehicle that the automobile has irreversibly transformed modern societies. And if the future dream of an intelligent highway is realized, user independence and autonomy will be further sacrificed for the sake of efficiency, mediated and ruled by yet another computerized network.44
[¶53.] Where computerization differs is that the computer, whatever its iconic status, is fundamentally a constitutive and not a productive element, a tool and not an end product. It defies comparison with almost any previous technical artifact, even the telephone or the automobile, not just because of the uniqueness of its capabilities (and limitations), but because the process of computerization embeds computers doubly in the structure of modern societies. As structural elements, computers determine the shape and the function of the computerized networks that are increasingly displacing traditional means for the provision of necessary social and business services. And the networks themselves are becoming deeply embedded as essential elements of the organizations and institutions that now provide those services.
[¶54.] As is the case with library information specialists, the MIS specialists and others charged with developing new capabilities and with exploring new means and modes of control must be thoroughly familiar with the rapid pace of technical development that characterizes the field. They are among the most avid and motivated readers of the vast body of magazines, special newsletters (many dedicated solely to information technology managers and MIS specialists), and other literature that emanates from the computer design community, and the most likely to attend conventions and demonstrations of the latest hardware and the latest variants of software. Even when they specifically promote the need to involve end users in decisions, it is, with very few exceptions, done from the perspective and context of design, not of use.45
[¶55.] The return of managerial control in a new, decentralized guise, via the design and standardization of networks, therefore suggests an organizational explanation for the shift in the balance of authority and power back to the designers after the brief era of end-user liberalism--an explanation that both complements and reinforces others based on more general social or psychological phenomena. MIS specialists and managers in charge of information systems for organizations, ranging from the largest corporations and government agencies to the smallest businesses, were hired not just to manage and maintain existing systems, but to help design and install new ones to increase organizational performance and efficiency--in other words, to act also as designers.
[¶56.] What gives this meaning and relevance are the parallels between the effects that the first wave of computerization--the automation of tools--had on the industrial work force and those that are taking place through the second wave--the networking of information systems--in the service and business sectors. In its initial phases, the microcomputer revolution seemed to offer the possibility for a social reformation of the workplace, restoring to users and workers some of the power and autonomy that had been lost with the decline of craft work and craft skills in the era of mass production and standardized office work. At first, owners and managers were prepared to accept independence and some degree of anarchic self-organization as the price of capturing the promised returns in productivity and efficiency. But control of information technology remained as central to the purposes and goals of management as ever,46 and it was only a matter of time and technical development before means were found to reassert it. The goals, the intentions, and the consequences are not unique to computers or computerization, but defined and shaped by the historical evolution of scientific management in the twentieth century, as discussed in the following chapter.
NOTES:
1 Looking for the three-letter combination "art," for instance, brought up every word that had an art in it: artifact; cuisinart; smart; cart; fart; and so on.
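To make the behavior described in this note concrete, here is a minimal sketch, in Python, of the kind of naive substring matching such a search apparently performed; the word list and function name are illustrative, not part of any actual catalog system.

```python
def naive_keyword_search(words, query):
    """Return every word containing the query as a substring,
    which is why searching "art" also returns "smart" and "cart"."""
    return [w for w in words if query in w.lower()]

catalog_words = ["artifact", "cuisinart", "smart", "cart", "art", "Bartok"]
hits = naive_keyword_search(catalog_words, "art")
print(hits)  # every word with "art" somewhere inside it
```

A search that respected word boundaries (matching only the standalone word "art") would have avoided most of these spurious hits.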
2 Strangely enough, this is to some extent true even for users who for other reasons preferred the Macintosh because it was simpler to use and otherwise more user-friendly.
3 Baker, "Discards."
4 To put a final twist on this, the last two people we hired as computer system operators in my institute were students in the "library" school. We used to get them from computer science. Since that time, the School of Library and Information Science has become the School of Information and Management Science; most of the book-librarians are gone.
5 See, for example, Baker, "Discards." An excellent review of the automation of Berkeley's library and the removal of the card catalogue may also be found in Stoll, Silicon Snake Oil, 193-202. The new San Francisco library has since joined the list of those who have used the former catalog cards as a decorative motif.
6 Baker, "Discards."
7 Reports of searches that either end up totally missing the item sought or return hundreds of entries are not totally exaggerated (nor are cases where a mistake in data entry, such as misspelling a name, makes finding the item nearly impossible). The other day I logged on in search of a set of hearings on organized crime in Chicago. After nearly half an hour of frustrating attempts at searching the database, I just gave up. But I couldn't believe the category, or the hearings, did not exist. So I went to the card catalog (which still survives in our Government Documents collection) and found them almost instantly, listed, as it turns out, under the subject heading of "Organized Crime, Illinois, Chicago." It seems that our search program could not manage to skip the Illinois when scanning the database.
8 See, for example, Wurman, Information Anxiety; Strassman, Information Payoff.
9 That is, if the databases were fixed. Unfortunately, the amount of information that is available seems to be growing even faster, paralleling the famous problem of trying to count the human race by parading them past a single point. According to Honan ("At the National Archives"), the U.S. National Archives are already receiving about one million megabytes per year.
10 Beniger, Control Revolution; Chandler, Jr., Visible Hand; Scott Morton, ed., Corporation of the 1990s.
11 Flamm, Targeting the Computer. Also see the excellent discussion on DARPA's origins and goals in Edwards, Closed World. Originally called the Advanced Research Projects Agency (ARPA), the D for Defense was added to its name in 1972. President Kennedy gave it a wide mandate to pursue research that made it (by design) more independent than most military-related research organizations.
12 Flamm, Targeting the Computer, 57ff.
13 In an ordinary mainframe, or mini, of the time, jobs had to be entered and executed sequentially. Physically, this could mean a long wait even for a small task; socially, it meant that the computer was an external rather than an interactive tool. In timesharing, computing time is cut up into slices that are allocated to users by some internal rule. Each connected user can operate as if they had a (sometimes much slower) machine that was their own.
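The slicing idea in this note can be illustrated with a toy round-robin scheduler: each connected user is granted a fixed slice of time in turn, so each appears to have a (sometimes much slower) machine of their own. This is a schematic sketch only; the function and names are illustrative, not an actual operating system interface.

```python
from collections import deque

def round_robin(jobs, slice_units=1):
    """jobs: dict mapping user -> units of work remaining.
    Returns the order in which time slices are granted."""
    queue = deque(jobs.items())
    schedule = []
    while queue:
        user, remaining = queue.popleft()
        schedule.append(user)           # this user gets the next slice
        remaining -= slice_units
        if remaining > 0:
            queue.append((user, remaining))  # back of the line
    return schedule

# Three users share one machine; no one waits for another to finish:
print(round_robin({"alice": 2, "bob": 1, "carol": 3}))
```

Because slices interleave rather than queue whole jobs sequentially, a small task no longer waits behind a large one, which is exactly the social shift the note describes.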
14 The initial impetus appears to have come from President Johnson, who was worried about the confusion and lack of coordination among the computer systems used by the three armed services. By 1969, ARPANET had three nodes up, one for each service. Today it has thousands. But the General Accounting Office still issues several reports a year criticizing the confusion and lack of coordination among the computer systems used by the three armed services. Some problems cannot be solved with technology.
15 Krol, Whole Internet, 11. As it became increasingly realized that much of the threat to existing telecommunications came not from physical destruction but from the ravages of electromagnetic pulses, DARPA and other governments increased their funding for other technical solutions and more robust means of communications. Among other things, this led to rapid improvements in fiber-optics technology, which then led back to a further transformation of computer networking capabilities.
16 Roberts, "The ARPANET."
17 Flamm, Targeting the Computer, 59-60. Protocols are the rules by which the packets are constructed, assembled, and sent. They may include such information as source, destination, packet length, bit-code, and other information designed to verify the integrity of the packet and detect transmission errors.
18 Krol, Whole Internet, 21ff. The TCP is particularly important for extended, decentralized national networks, since the idea of multiple paths for transmission means that the packets may not be received in the same sequence in which they were sent. TCP will recognize this and wait until the whole sequence has arrived before declaring the message to be complete.
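The reassembly behavior described in notes 17 and 18 can be sketched schematically: each packet carries a sequence number, packets may arrive in any order over multiple paths, and the message is declared complete only once the whole sequence is present. This is an illustration of the idea, with invented names, not the actual TCP implementation.

```python
def reassemble(packets, total):
    """packets: list of (sequence_number, payload) in arrival order.
    Returns the full message if all `total` packets have arrived,
    otherwise None (i.e., keep waiting)."""
    buffer = {}
    for seq, payload in packets:
        buffer[seq] = payload           # store by sequence number
    if len(buffer) < total:
        return None                     # sequence incomplete; wait
    return "".join(buffer[i] for i in range(total))

# Packets received out of order are still assembled correctly:
arrived = [(2, "lo!"), (0, "He"), (1, "l")]
print(reassemble(arrived, 3))  # prints "Hello!"
```

The key point is that ordering is restored from the sequence numbers at the receiving end, so the network is free to route each packet independently.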
19 Flamm, Targeting the Computer, 61.
20 In order to organize the users coherently so that they do not interfere with each other, its designers came up with the tree-like hierarchy of directories and subdirectories that PC users now take almost completely for granted, with the additional constraint of restricted access between individual branches.
21 At least that is how it appears to the user. In many advanced operating systems, the "copying" or "moving" of files takes place by changing the address or permission settings (pointers) rather than by physically copying the file bit-by-bit from one location to another.
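The point in note 21 can be demonstrated directly: on most systems a "move" within one filesystem is a rename, an update to directory entries, rather than a bit-by-bit copy, which is why moving even a very large file is nearly instantaneous. The file names below are illustrative.

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "original.txt")
dst = os.path.join(workdir, "moved.txt")

with open(src, "w") as f:
    f.write("catalog record")

# "Move" the file: only the directory entry changes, not the data.
os.rename(src, dst)
# (shutil.copyfile, by contrast, would read and rewrite every byte.)

print(os.path.exists(src), os.path.exists(dst))  # prints: False True
```

The same distinction holds across filesystems, where a "move" silently falls back to a copy followed by a delete, and is correspondingly slower.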
22 The excitement of that period, and the eagerness with which graduate students in particular spread the gospel of UNIX and electronic mail, is hard to recapture. As the saying goes, "You would have had to have been there at the time."
23 Recognizing the achievement, the Association for Computing Machinery awarded three Xerox PARC researchers the 1984 Software Systems Award for the invention of distributed personal computing. By that time, the Alto was history, and the PARC project team broken up.
24 Flamm, Creating the Computer, 239.
25 See, for an excellent example, Negroponte, Being Digital.
26 Stoll, Silicon Snake Oil; Turkle, Life on the Screen.
27 Krol, Whole Internet, contains a wealth of information about Internet access and rules, especially for independent users.
28 The best definition is perhaps the general one of Quarterman: "The Internet is an internetwork of many networks all running the TCP/IP protocol suite, connected through gateways, and sharing common name and address spaces. It exists to facilitate sharing of resources at participating organizations and collaboration among researchers, as well as to provide a testbed for new developments in networking." Quarterman, The Matrix, 278.
29 The name, which is a formalization of a technical description, appears to be more a matter of history than intent; the original ARPANET was sometimes referred to as the ARPA internet (because it was an association, or intertie, between and among other networks).
30 As pointed out by Denning, "ARPANET After 20 Years," the Research Internet had taken over so many of ARPANET's functions at the time of its dissolution that many users never even noticed.
31 The 1993 figures are from Eckhouse, "INTERNET." John Markoff had estimated the number at ten million some four months earlier ("Turning the Desktop PC").
32 This is somewhat easier for so-called "client-server" LANs using a central, dedicated microcomputer to handle LAN traffic and common programs than for a peer-to-peer network such as the PARC design for Ethernet. On the other hand, client-server networks are increasingly popular, particularly in medium-sized laboratories or businesses.
33 Are they really part of the Internet or just attachments? The response of some is that because the Internet is so self-organized, self-designed, and self-regulated, they are if they want to be. See, e.g., Krol, Whole Internet, 13. Krol's metaphor for the Internet is that of a (very Protestant) church, a sort of free association that is entirely voluntary, but very demanding of beliefs and compliance if you do join.
34 Markoff, "A Web of Networks."
35 According to a recent story (Cavender, "Making the Web Work for All"), the goals of a new consortium formed to promote the use of the World Wide Web and make it "more democratic" are to set better and stricter standards, including protocols, specifications, and software. According to Tim Berners-Lee, who invented the Web and now directs and coordinates the consortium, what is needed is more order, structure, and harmony. IBM, Microsoft, AT&T, and Sony are among the first corporate members.
36 Rockart and Short, "The Networked Organization."
37 This is the central argument of Beniger's splendid work of history and analysis, Control Revolution.
38 Korzeniowski ("The IS Tug-of-War") uses as an example Florida Power and Light, which at one time had more than 3,000 users running a "hodgepodge" of LANs with a variety of incompatible network protocols that included Novell, DEC, IBM SNA, and TCP/IP.
39 Although IBM tried to promote its own system, Token Ring, users found Ethernet to be faster and more reliable.
40 Rockart and Short, "Networked Organization," 213.
41 Korzeniowski, "The IS Tug-of-War."
42 Abate, "Someone Could Be Watching."
43 I apologize to those who consider this analogy to be hoary and overworked, but I think that in this case it is quite appropriate. See, for example, Flink, Automobile Age.
44 See, for example, Juhlin, "Information Technology."
45 Wieckert, "Design Under Uncertainty"; Suchman, "Working Relations."
46 See, for example, Scott Morton, Corporation of the 1990s.