They would later call it a Golden Age of hacking, this marvelous existence on the ninth floor of Tech Square. Spending their time in the drab machine room and the cluttered offices nearby, gathered closely around terminals where rows and rows of green characters of code would scroll past them, marking up printouts with pencils retrieved from shirt pockets, and chatting in their peculiar jargon over this infinite loop or that losing subroutine, the technological monks who populated the lab were as close to paradise as they would ever be. A benevolently anarchistic lifestyle dedicated to productivity and PDP-6 passion. Art, science, and play had merged into the magical activity of programming, with every hacker an omnipotent master of the flow of information within the machine. The debugged life in all its glory.
But as much as the hackers attempted to live the hacker dream without interference from the pathetically warped systems of the “real world,” it could not be done. Greenblatt and Knight’s failure to convince outsiders of the natural superiority of the Incompatible Time-sharing System was only one indication that the total immersion of a small group of people into hackerism might not bring about change on the massive scale that all the hackers assumed was inevitable. It was true that, in the decade since the TX-0 was first delivered to MIT, the general public and certainly the other students on campus had become more aware of computers in general. But they did not regard computers with the same respect and fascination as did the hackers. And they did not necessarily regard the hackers’ intentions as benign and idealistic.
On the contrary, many young people in the late 1960s saw computers as something evil, part of a technological conspiracy where the rich and powerful used the computer’s might against the poor and powerless. This attitude was not limited to students protesting, among other things, the now exploding Vietnam War (a conflict fought in part by American computers). The machines which stood at the soul of hackerism were also loathed by millions of common, patriotic citizens who saw computers as a dehumanizing factor in society. Every time an inaccurate bill arrived at a home, and the recipient’s attempts to set it right wound up in a frustrating round of calls—usually leading to an explanation that “the computer did it,” and only herculean human effort could erase the digital blot—the popular contempt toward computers grew. Hackers, of course, attributed those slipups to the brain-damaged, bureaucratic, batch-processed mentality of IBM. Didn’t people understand that the Hacker Ethic would eliminate those abuses by encouraging people to fix bugs like thousand-dollar electric bills? But in the public mind there was no distinction between the programmers of Hulking Giants and the AI lab denizens of the sleek, interactive PDP-6. And in that public mind all computer programmers, hackers or not, were seen either as wild-haired mad scientists plotting the destruction of the world or as pasty-skinned, glassy-eyed automatons, repeating wooden phrases in dull monotones while planning the next foray into technological big-brotherism.
Most hackers chose not to dwell on those impressions. But in 1968 and 1969, the hackers had to face their sad public images, like it or not.
A protest march that climaxed at Tech Square dramatically indicated how distant the hackers were from their peers. Many of the hackers were sympathetic to the antiwar cause. Greenblatt, for instance, had gone to a march in New Haven, and had done some phone line hookups for antiwar radicals at the National Strike Information Center at Brandeis. And hacker Brian Harvey was very active in organizing demonstrations; he would come back and report in what low esteem the AI lab was held by the protesters.
There was even some talk at antiwar meetings that some of the computers at Tech Square were used to help run the war. Harvey would try to tell them it wasn’t so, but the radicals would not only disbelieve him but get angry that he’d try to feed them bullshit.
The hackers shook their heads when they heard of that unfortunate misunderstanding. One more example of how people didn’t understand! But one charge leveled at the AI lab by the antiwar movement was entirely accurate: all the lab’s activities, even the most zany or anarchistic manifestations of the Hacker Ethic, had been funded by the Department of Defense. Everything, from the Incompatible Time-sharing System to Peter Samson’s subway hack, was paid for by the same Department of Defense that was killing Vietnamese and drafting American boys to die overseas.
The general AI lab response to that charge was that the Defense Department’s Advanced Research Projects Agency (ARPA), which funded the lab, never asked anyone to come up with specific military applications for the computer research engaged in by hackers and planners. ARPA had been run by computer scientists; its goal had been the advancement of pure research. During the late 1960s a planner named Robert Taylor was in charge of ARPA funding, and he later admitted to diverting funds from military, “mission-oriented” projects to projects that would advance pure computer science. It was only the rarest hacker who called the ARPA funding “dirty money.”
Almost everyone else, even people who opposed the war, recognized that ARPA money was the lifeblood of the hacking way of life. When someone pointed out the obvious—that the Defense Department might not have asked for specific military applications for the Artificial Intelligence and systems work being done, but still expected a bonanza of military applications to come from the work (who was to say that all that “interesting” work in vision and robotics would not result in more efficient bombing raids?)—the hackers would either deny the obvious (Greenblatt: “Though our money was coming from the Department of Defense, it was not military”) or talk like Marvin Minsky: “There’s nothing illegal about a Defense Department funding research. It’s certainly better than a Commerce Department or Education Department funding research...because that would lead to thought control. I would much rather have the military in charge of that . . . the military people make no bones about what they want, so we’re not under any subtle pressures. It’s clear what’s going on. The case of ARPA was unique, because they felt that what this country needed was people good in defense technology. In case we ever needed it, we’d have it.”
Planners thought they were advancing true science. Hackers were blithely formulating their tidy, new-age philosophy based on free flow of information, decentralization, and computer democracy. But the antimilitary protesters thought it was a sham, since all that so-called idealism would ultimately benefit the War Machine that was the Defense Department. The antiwar people wanted to show their displeasure, and the word filtered up to the Artificial Intelligence lab one day that the protesters were planning a march ending with a rally right there on the ninth floor. There, protesters would gather to vividly demonstrate that all of them—hackers, planners, and users—were puppets of the Defense Department.
Russ Noftsker, the nuts-and-bolts administrator of the AI lab, took the threat of protesters very seriously. These were the days of the Weather Underground, and he feared that wild-eyed radicals were planning to actually blow up the computer. He felt compelled to take certain measures to protect the lab.
Some of the measures were so secretive—perhaps involving government agencies like the CIA, which had an office in Tech Square—that Noftsker would not reveal them, even a decade after the war had ended. But other measures were uncomfortably obvious. He removed the glass on the doors leading from the elevator foyer on the ninth floor to the area where the hackers played with computers. In place of the glass, Noftsker installed steel plates, covering the plates with wood so it would not look as if the area were as barricaded as it actually was. The glass panels beside the door were replaced with half-inch-thick bulletproof Plexiglas so you could see who was petitioning for entry before you unlocked the locks and removed the bolts. Noftsker also made sure the doors had heavy-duty hinges bolted to the walls, so that the protesters would not try to remove the entire door, rush in, and storm the computers.
During the days preceding the demonstration, only people whose names were on an approved list were officially allowed entry to this locked fortress. On the day of the demonstration, he even went so far as to distribute around forty Instamatic cameras to various people, asking them to take pictures of the demonstrators when they ventured outside the protected area. If the demonstrators chose to become violent, at least there would be documentation of the wrongdoers.
The barricades worked insofar as the protesters—around twenty or thirty of them, in Noftsker’s estimate—walked to Tech Square, stayed outside the lab a bit, and left without leveling the PDP-6 with sledgehammers. But the collective sigh of relief on the part of the hackers must have been mixed with much regret. While they had created a lock-less, democratic system within the lab, the hackers were so alienated from the outside world that they had to use those same hated locks, barricades, and bureaucrat-compiled lists to control access to this idealistic environment. While some might have groused at the presence of the locks, the usual free-access guerrilla fervor did not seem to be applied in this case. Some of the hackers, shaken at the possibility of a rout, even rigged the elevator system so that the elevators could not go directly to the ninth floor. Though previously some of the hackers had declared, “I will not work in a place that has locks,” after the demonstrations were over, and after the restricted lists were long gone, the locks remained. Generally, the hackers chose not to view the locks as symbols of how far removed they were from the mainstream.
A very determined solipsism reigned on the ninth floor, a solipsism that stood its ground even when hackerism suffered some direct, though certainly less physically threatening, attacks in publications and journals. It was tough to ignore, however, the most vicious of these, since it came from within MIT, from a professor of Computer Science (yes, MIT had come around and started a department) named Joseph Weizenbaum. A former programmer himself, a thin, mustachioed man who spoke with a rolling Eastern European accent, Weizenbaum had been at MIT since 1963, but had rarely interacted with the hackers. His biggest programming contribution to AI had been a program called ELIZA, which carried on a conversation with the user; the computer would take the role of a therapist. Weizenbaum recognized the computer’s power, and was disturbed to note how seriously users would interact with ELIZA. Even though people knew it was “only” a computer program, they would tell it their most personal secrets. To Weizenbaum, it was a demonstration of how the computer’s power could lead to irrational, almost addictive behavior, with dehumanizing consequences. And Weizenbaum thought that hackers—or “compulsive programmers”—were the ultimate in computer dehumanization. In what was to become a notorious passage, he wrote, in Computer Power and Human Reason:
. . . bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers, already poised to strike, at the buttons and keys on which their attention seems to be riveted as a gambler’s on the rolling dice. When not so transfixed, they often sit at tables strewn with computer printouts over which they pore like possessed students of a cabbalistic text. They work until they nearly drop, twenty, thirty hours at a time. Their food, if they arrange it, is brought to them: coffee, Cokes, sandwiches. If possible, they sleep on cots near the printouts. Their rumpled clothes, their unwashed and unshaven faces, and their uncombed hair all testify that they are oblivious to their bodies and to the world in which they move. These are computer bums, compulsive programmers . . .
Weizenbaum would later say that the vividness of this description came from his own experience as a hacker of sorts, and was not directly based on observations of the ninth-floor culture. But many hackers felt otherwise. Several thought that Weizenbaum had identified them personally, even invaded their privacy in his description. Some others guessed that Greenblatt had been unfairly singled out; indeed, Greenblatt did send Weizenbaum some messages objecting to the screed.
Still, there was no general introspection resulting from this or any other attack on the hacker life-style. That was not the way of the lab. Hackers would not generally delve into each other’s psychological makeups. “There was a set of shared goals”—Tom Knight would later explain—“a set of shared intellectual excitement, even to a large degree a set of shared social life, but there was also a boundary which people were nervous to go beyond.”
It was this unspoken boundary that came to bother hacker David Silver. He joined the lab as an adolescent and literally came to maturity there, and besides his productive hacking he spent time thinking about the relationship between hackers and computers.
He came to be fascinated at how all of them got so attached to, so intimately connected with something as simple as the PDP-6. It was almost terrifying: thinking about this made David Silver wonder what it was that connected people together, how people found each other, why people got along...when something relatively simple like the PDP-6 drew the hackers so close. The whole subject made him wonder on the one hand whether people were just fancy kinds of computers or on the other hand whether they were images of God as a spirit.
These introspections were not things he necessarily shared with his mentors, like Greenblatt or Gosper. “I don’t think people had sort of warm conversations with each other,” he would later say. “That wasn’t the focus. The focus was on sheer brainpower.” This was the case even with Gosper: Silver’s apprenticeship with him was not so much a warm human relationship, he’d later reflect, as “a hacker relationship,” very close in terms of what they shared in terms of the computer, but not imbued with the richness of a real-world friendship.
“There were many, many, many years that went by when all I did was hack computers, and I didn’t feel lonely, like I was missing anything,” Silver would say. “But I guess as I started to grow up more, round out more, change more, become less eccentric in certain ways, I started needing more input from people. [By not going to high school] I bypassed all that social stuff and went right into this blue-sky think tank... I spent my lifetime walking around talking like a robot, talking to a bunch of other robots.”
Sometimes the hacker failure to be deeply personal had grim consequences. The lab might have been the ideal location for guru-level hackers, but for some the pressure was too much. Even the physical layout of the place promoted a certain high-tension feeling, with the open terminals, the constant intimidating presence of the greatest computer programmers in the world, the cold air and the endless hum of the air conditioners. At one point a research firm was called in to do a study of the excessive, inescapable noise, and they concluded that the hum of the air conditioner was so bothersome because there weren’t enough competing noises—so they fixed the machines to make them give off a loud, continual hiss. In Greenblatt’s words, this change “was not a win,” and the constant hiss made the long hours on the ninth floor rather nerve-racking for some. Add that to other factors—lack of sleep, missed meals to the point of malnutrition, and a driving passion to finish that hack—and it was clear why some hackers went straight over the edge.
Greenblatt was best at spotting “the classical syndrome of various kinds of losses,” as he called it. “In a certain way, I was concerned about the fact that we couldn’t have people dropping dead all over the place.” Greenblatt would sometimes tell people to go home for a while, take it easy. Other things were beyond him. For instance, drugs. One night, while driving back from a Chinese meal, a young hacker turned to him and asked, not kidding, if he wanted to “shoot up.” Greenblatt was flabbergasted. The real world was penetrating again, and there was little Greenblatt could do. One night not long afterward, that particular hacker leapt off the Harvard Bridge into the ice-covered Charles River and was severely injured. It was not the only suicide attempt by an AI lab hacker.
From that evidence alone, it would seem that Weizenbaum’s point was well taken. But there was much more to it than that. Weizenbaum did not acknowledge the beauty of the hacker devotion itself . . . or the very idealism of the Hacker Ethic. He had not seen, as Ed Fredkin had, Stew Nelson composing code on the TECO editor while Greenblatt and Gosper watched: without any of the three saying a word, Nelson was entertaining the others, encoding assembly-language tricks which to them, with their absolute mastery of that PDP-6 “language,” had the same effect as hilariously incisive jokes. And after every few instructions there would be another punch line in this sublime form of communication . . . The scene was a demonstration of sharing which Fredkin never forgot.
While conceding that hacker relationships were unusual, especially in that most hackers lived asexual lives, Fredkin would later say that “they were living the future of computers . . . They just had fun. They knew they were elite, something special. And I think they appreciated each other. They were all different, but each knew something great about the other. They all respected each other. I don’t know if anything like [that hacker culture] has happened in the world. I would say they kind of loved each other.”
The hackers focused on the magic of computers instead of human emotions, but they also could be touched by other people. A prime example would be the case of Louis Merton (a pseudonym). Merton was an MIT student, somewhat reserved, and an exceptional chess player. Save for the last trait, Greenblatt at first thought him well within the spectrum of random people who might wander into the lab.
The fact that Merton was such a good chess player pleased Greenblatt, who was then working to build an actual computer which would run a souped-up version of his chess program. Merton learned some programming, and joined Greenblatt on the project. He later did his own chess program on a little-used PDP-7 on the ninth floor. Merton was enthusiastic about chess and computers, and there was little to foreshadow what happened during the Thanksgiving break in late 1966, when, in the little theater-like AI “playroom” on Tech Square’s eighth floor (where Professor Seymour Papert and a group were working on the educational LOGO computer language), Merton temporarily turned into a vegetable. He assumed a classic position of catatonia, rigidly sitting upright, hands clenched into fists at his side. He would not respond to questions, would not even acknowledge the existence of anything outside himself. People didn’t know what to do. They called up the MIT infirmary and were told to call the Cambridge police, who carted poor Merton away. The incident severely shook the hackers, including Greenblatt, who found out about it when he returned from a holiday visit home.
Merton was not one of the premier hackers. Greenblatt was not an intimate friend. Nonetheless, Greenblatt immediately drove out to Westboro State Hospital to recover Merton. It was a long drive, and the destination reminded Greenblatt of something out of the Middle Ages. Less a hospital than a prison. Greenblatt became determined not to leave until he got Merton out. The last step in this tortuous process was getting the signature of an elderly, apparently senile doctor. “Exactly [like something] out of a horror film,” Greenblatt later recalled. “He was unable to read. This random attendant type would say, ‘Sign here. Sign here.’”
It turned out that Merton had a history of these problems. Unlike most catatonics, Merton would improve after a few days, especially when he was given medicine. Often, when he went catatonic somewhere, whoever found him would call someone to take him away, and the doctors would give a diagnosis of permanent catatonia even as Merton was coming to life again. He would call up the AI lab and say, “Help,” and someone, often Greenblatt, would come and get him.
Later, someone discovered in MIT records a letter from Merton’s late mother. The letter explained that Louis was a strange boy, and he sometimes would go stiff. In that case, all you needed to do was to ask, “Louis, would you like to play a game of chess?” Fredkin, who had also taken an interest in Merton, tried this. Merton one day stiffened on the edge of his chair, totally in sculpture mode. Fredkin asked him if he’d like to play chess, and Merton stiffly marched over to the chess board. The game got under way with Fredkin chatting away in a rather one-sided conversation, but suddenly Merton just stopped. Fredkin asked, “Louis, why don’t you move?” After a very long pause, Merton responded in a guttural, slow voice, “Your...king’s . . . in . . . check.” Fredkin had inadvertently uncovered the check from his last move.
Merton’s condition could be mitigated by a certain medicine, but for reasons of his own he almost never took it. Greenblatt would plead with him, but he’d refuse. Once Greenblatt went to Fredkin to ask him to help out; Fredkin went back with Greenblatt to find Merton stiff and unresponsive.
“Louis, how come you’re not taking your medicine?” he asked. Merton just sat there, a weak smile frozen on his face. “Why won’t you take it?” Fredkin repeated.
Suddenly, Merton reared back and walloped Fredkin on the chin. That kind of behavior was one of Merton’s unfortunate features. But the hackers showed remarkable tolerance. They did not dismiss him as a loser. Fredkin considered Merton’s case a good example of the essential humanity of the group which Weizenbaum had, in effect, dismissed as emotionless androids. “He’s just crazy,” Minsky would later say of Weizenbaum. “These [hackers] are the most sensitive, honorable people that have ever lived.” Hyperbole, perhaps, but it was true that behind their single-mindedness there was warmth, in the collective realization of the Hacker Ethic. As much as any devout religious order, the hackers had sacrificed what outsiders would consider basic emotional behavior—for the love of hacking.
David Silver, who would eventually leave the order, was still in awe of that beautiful sacrifice years later: “It was sort of necessary for these people to be extremely brilliant and in some sense, handicapped socially so that they would just kind of concentrate on this one thing.” Hacking. The most important thing in the world to them.
• • • • • • • •
The computer world outside Cambridge did not stand still while the Hacker Ethic flourished on the ninth floor of Tech Square. By the late 1960s, hackerism was spreading, partly because of the proliferation of interactive machines like the PDP-10 or the XDS-940, partly because of friendly programming environments (such as the one hackers had created at MIT), and partly because MIT veterans would leave the lab and carry their culture to new places. But the heart of the movement was this: people who wanted to hack were finding computers to hack on.
These computers were not necessarily at MIT. Centers of hacker culture were growing at various institutions around the country, from Stanford to Carnegie-Mellon. And as these other centers reached critical mass—enough dedicated people to hack a large system and go on nightly pilgrimages to local Chinese restaurants—they became tempting enough to lure some of the AI lab hackers away from Tech Square. The intense MIT style of hackerism would be exported through these emissaries.
Sometimes it would not be an institution that hackers moved to, but a business. A programmer named Mike Levitt began a leading-edge technology firm called Systems Concepts in San Francisco. He was smart enough to recruit phone-and-PDP-1 hacker Stew Nelson as a partner; TX-0 music master Peter Samson also joined this high-tech hardware design-and-manufacture business. All in all, the small company managed to get a lot of the concentrated talent around Tech Square out to San Francisco. This was no small feat, since hackers were generally opposed to the requirements of California life, particularly driving and recreational exposure to the sun. But Nelson had learned his lesson earlier—despite Fredkin’s repeated urgings in the mid-sixties, he’d refused to go to Triple-I’s new Los Angeles headquarters until, one day, after emphatically reiterating his vow, he stormed out of Tech Square without a coat. It happened to be the coldest day of the Cambridge winter that year, and as soon as he walked outside his glasses cracked from the sudden change of temperature. He walked straight back to Fredkin’s office, his eyebrows covered with icicles, and said, “I’m going to Los Angeles.”
In some cases, a hacker’s departure would be hastened by what Minsky and Ed Fredkin called “social engineering.” Sometimes the planners would find a hacker getting into a rut, perhaps stuck on some systems problem, or maybe becoming so fixated on extracurricular activities, like lock hacking or phone hacking, that planners deemed his work no longer “interesting.” Fredkin would later recall that hackers could get into a certain state where they were “like anchors dragging the thing down. Time had gone by them, in some sense. They needed to get out of the lab and the lab needed them out. So some surprising offer would come to those persons, or some visit arranged, usually someplace far, far away. These people started filtering out in the world to companies or other labs. It wasn’t fate—I would arrange it.”
Minsky would say, “Brave Fredkin,” acknowledging the clandestine nature of Fredkin’s activity, which would have to be done without the knowledge of the hacker community; they would not tolerate an organizational structure that actually dictated where people should go.
While the destination could be industry—besides Systems Concepts, Fredkin’s Information International company hired many of the MIT hackers—it was often another computer center. The most desirable of these was the Stanford AI Lab (SAIL), which Uncle John McCarthy had founded when he left MIT in 1962.
In many respects SAIL was a mirror image of MIT’s operation, distorted only by the California haze that would sometimes drift from the Pacific Ocean to the peninsula. But the California distortion was a significant one, demonstrating how even the closest thing to the MIT hacker community was only an approximation of the ideal; the hothouse MIT style of hackerism was destined to travel, but when exposed to things like California sunlight it faded a bit in intensity.
The difference began with the setting, a semicircular concrete-glass-and-redwood former conference center in the hills overlooking the Stanford campus. Inside the building, hackers would work at any of sixty-four terminals scattered around the various offices. None of the claustrophobia of Tech Square. No elevators, no deafening air conditioning hiss. The laid-back style meant that much of MIT’s sometimes constructive acrimony—the shouting sessions at the TMRC classroom, the religious wars between grad students and hackers—did not carry over. Instead of the battle-strewn imagery of shoot-’em-up space science fiction that pervaded Tech Square, the Stanford imagery was the gentle lore of elves, hobbits, and wizards described in J.R.R. Tolkien’s Middle Earth trilogy. Rooms in the AI lab were named after Middle Earth locations, and the SAIL printer was rigged so it could handle three different Elven type fonts.
The California difference was reflected in the famous genre of computer games that the Stanford lab eventually developed after the heyday of MIT’s Spacewar. A Stanford hacker named Donald Woods discovered a kind of game on a Xerox research computer one day that involved a spelunker-explorer seeking treasure in a dungeon. Woods contacted the programmer, Will Crowther, talked to him about it, and decided to expand Crowther’s game into a full-scale Adventure, where a person could use the computer to assume the role of a traveler in a Tolkienesque setting, fight off enemies, overcome obstacles through clever tricks, and eventually recover treasure. The player would give two-word, verb-noun commands to the program, which would respond depending on how the command changed the universe that had been created inside the computer by Don Woods’ imagination. For instance, the game began with the computer describing your opening location:
YOU ARE STANDING AT THE END OF A ROAD BEFORE A SMALL BRICK BUILDING. AROUND YOU IS A FOREST. A SMALL STREAM FLOWS OUT OF THE BUILDING AND DOWN A GULLY.
If you wrote GO SOUTH, the computer would say:
YOU ARE IN A VALLEY IN THE FOREST BESIDE A STREAM TUMBLING ALONG A ROCKY BED.
Later on, you would have to figure out all sorts of tricks to survive. The snake you encountered, for instance, could only be dealt with by releasing a bird you’d picked up along the way. The bird would attack the snake, and you’d be free to pass. Each “room” of the adventure was like a computer subroutine, presenting a logical problem you’d have to solve.
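The room-as-subroutine structure is simple enough to sketch in code. Woods’ program ran in FORTRAN on the SAIL PDP-10; the toy Python version below, with invented room names and messages, is meant only to illustrate the two-word, verb-noun loop described above, not the actual game.

```python
# A toy sketch of Adventure's command loop, not Woods' actual code (which was
# FORTRAN on the PDP-10). Rooms are data; the loop parses two-word verb-noun
# commands and lets them change the player's place in the simulated universe.

ROOMS = {
    "road": ("YOU ARE STANDING AT THE END OF A ROAD BEFORE A SMALL BRICK "
             "BUILDING. AROUND YOU IS A FOREST.",
             {"south": "valley"}),
    "valley": ("YOU ARE IN A VALLEY IN THE FOREST BESIDE A STREAM TUMBLING "
               "ALONG A ROCKY BED.",
               {"north": "road"}),
}

def play() -> None:
    location = "road"
    while True:
        description, exits = ROOMS[location]
        print(description)
        words = input("> ").upper().split()
        if words == ["QUIT"]:
            return
        if len(words) != 2:
            print("PLEASE GIVE A TWO-WORD, VERB-NOUN COMMAND.")
            continue
        verb, noun = words
        if verb == "GO" and noun.lower() in exits:
            location = exits[noun.lower()]   # the command changes the universe
        else:
            print("NOTHING HAPPENS HERE.")

if __name__ == "__main__":
    play()
```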
In a sense, Adventure was a metaphor for computer programming itself—the deep recesses you explored in the Adventure world were akin to the basic, most obscure levels of the machine that you’d be traveling in when you hacked in assembly code. You could get dizzy trying to remember where you were in both activities. Indeed, Adventure proved as addicting as programming—Woods put the program on the SAIL PDP-10 on a Friday, and some hackers (and real-world “tourists”) spent the entire weekend trying to solve it. Like any good system or program, of course, Adventure was never finished—Woods and his friends were always improving it, debugging it, adding more puzzles and features. And like every significant program, Adventure was expressive of the personality and environment of the authors. For instance, Woods’ vision of a mist-covered toll bridge protected by a stubborn troll came during a break in hacking one night, when Woods and some other hackers decided to watch the sun rise at a mist-shrouded Mount Diablo, a substantial drive away. They didn’t make it in time, and Woods remembered what that misty dawn looked like and wrote it into the description of that scene in the game, which he conceived of over breakfast that morning.
It was at Stanford that gurus were as likely to be faculty people as systems hackers (among Stanford professors was the noted computer scientist Donald Knuth, author of the multivolume classic The Art of Computer Programming). It was at Stanford that, before the Adventure craze, the casual pleasures of Spacewar were honed to a high art (Slug Russell had come out with McCarthy, but it was younger hackers who developed five-player versions and options for reincarnation, and ran extensive all-night tournaments). It was at Stanford that hackers would actually leave their terminals for a daily game of volleyball. It was at Stanford that a fund-raising drive was successfully undertaken for an addition to the lab, which would have been inconceivable at MIT: a sauna. It was at Stanford that the computer could support video images, allowing users to switch from a computer program to a television program. The most famous use of this, according to some SAIL regulars, came when SAIL hackers placed an ad in the campus newspaper for a couple of willing young coeds. The women answering the ad became stars of a sex orgy at the AI lab, captured by a video camera and watched over the terminals by appreciative hackers. Something else that never would have occurred at MIT.
It was not as if the SAIL hackers were any less devoted to their hacking than the MIT people. In a paper summarizing the history of the Stanford lab, Professor Bruce Buchanan refers to the “strange social environment created by intense young people whose first love was hacking,” and it was true that the lengths that hackers went to in California were no less extreme than those at Tech Square. For instance, it did not take long for SAIL hackers to notice that the crawl space between the low-hanging artificial ceiling and the roof could be a comfortable sleeping hutch, and several of them actually lived there for years. One systems hacker spent the early 1970s living in his dysfunctional car parked in the lot outside the building—once a week he’d bicycle down to Palo Alto for provisions. The other alternative for food was the Prancing Pony; named after a tavern in Middle Earth, this was the SAIL food-vending machine, loaded with health-food goodies and potstickers from a local Chinese restaurant. Each hacker kept an account on the Prancing Pony, maintained by the computer. After you made your food purchase, you were given the option to double-or-nothing the cost of your food, the outcome depending on whether it was an odd- or even-numbered millisecond when you made the gamble (a mechanism sketched below). With those kinds of provisions, SAIL was even more amenable than MIT for round-the-clock hacking. It had its applications people and its systems people. It was open to outsiders, who would sit down and begin hacking; and if they showed promise, Uncle John McCarthy might hire them.
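The Prancing Pony’s double-or-nothing bet amounted to reading a fair coin off the system clock. Here is a minimal sketch in Python; the function name is invented, and which parity meant a free purchase is not recorded above, so letting even milliseconds win is an assumption of this sketch.

```python
import time

def pony_charge(price_cents: int, gamble: bool) -> int:
    """Settle a Prancing Pony purchase against a hacker's account.

    Sketch only: the real accounting lived on the SAIL machine, and whether
    odd or even milliseconds meant "nothing" is an assumption made here.
    """
    if not gamble:
        return price_cents
    millisecond = time.time_ns() // 1_000_000   # clock at the moment of the bet
    return 0 if millisecond % 2 == 0 else 2 * price_cents

# A hacker buying a 75-cent snack and pressing his luck:
print(pony_charge(75, gamble=True))   # 0 or 150, depending on the millisecond
```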
SAIL hackers also lived by the Hacker Ethic. The time-sharing system on the SAIL machine, like ITS, did not require passwords, but, at John McCarthy’s insistence, a user had the option to keep his files private. The SAIL hackers wrote a program to identify these people, and proceeded to unlock the files, which they would read with special interest. “Anybody that’s asking for privacy must be doing something interesting,” SAIL hacker Don Woods would later explain.
Likewise, SAIL was in no way inferior to MIT in doing important computer work. Just like their counterparts at MIT’s AI lab, SAIL hackers were robotics fans, as implied by the sign outside SAIL: CAUTION, ROBOT VEHICLE. It was John McCarthy’s dream to have a robot leave the funky AI lab and travel the three miles to campus under its own physical and mental power. At one point, presumably by mistake, a robot got loose and was careening down the hill when, fortunately, a worker driving to the lab spotted it and rescued it. Various hackers and academics worked at SAIL in important planner fields like speech understanding and natural language studies. Some of the hackers got heavily involved in a computer music project that would break ground in that field.
Stanford and other labs, whether in universities like Carnegie-Mellon or research centers like Stanford Research Institute, became closer to each other when ARPA linked their computer systems through a communications network. This “ARPAnet” was very much influenced by the Hacker Ethic, in that among its values was the belief that systems should be decentralized, encourage exploration, and urge a free flow of information. From a computer at any “node” on the ARPAnet, you could work as if you were sitting at a terminal of a distant computer system. Hackers from all over the country could work on the ITS system at Tech Square, and the hacker values implicit in that were spreading. People sent a tremendous volume of electronic mail to each other, swapped technical esoterica, collaborated on projects, played Adventure, formed close hacker friendships with people they hadn’t met in person, and kept in contact with friends at places they’d previously hacked. The contact helped to normalize hackerism, so you could find hackers in Utah speaking in the peculiar jargon developed in the Tool Room next to the Tech Model Railroad Club.
Yet even as the Hacker Ethic grew in the actual number of its adherents, the MIT hackers noted that outside of Cambridge things were not the same. The hackerism of Greenblatt, Gosper, and Nelson had been directed too much toward creating one Utopia, and even the very similar offshoots were, by comparison, losing in various ways.
“How could you go to California, away from the action?” people would ask those who went to Stanford. Some left because they tired of the winner-loser dichotomy on the ninth floor, though they would admit that the MIT intensity was not to be found in California. Tom Knight, who hacked at Stanford for a while, used to say that you couldn’t really do good work at Stanford.
David Silver went out there, too, and concluded that “the people at Stanford were kind of losers in their thinking. They weren’t as rigorous in certain ways and they sort of were more fun-loving. One guy was building a race car and another was building an airplane in the basement . . .” Silver himself got into hardware at Stanford when he built an audio switch to allow people working at their terminals to listen to any of sixteen channels, from radio stations to a SAIL public-address system. All the choices, of course, were stored within the SAIL PDP-6. And Silver thinks that exposure to the California style of hacking helped loosen him up, preparing him to make the break from the closed society of the ninth floor.
The defection of Silver and the other MIT hackers did not cripple the lab. New hackers came to replace them. Greenblatt and Gosper remained, as did Knight and some other canonical hackers. But the terrifically optimistic energy that came with the opening explosion of AI research, of setting up new software systems, seemed to have dissipated. Some scientists were complaining that the boasts of early AI planners were not fulfilled. Within the hacker community itself, the fervid habits and weird patterns established in the past decade seemed to have solidified. Were they ossified as well? Could you grow old as a hacker, keep wrapping around to those thirty-hour days? “I was really proud,” Gosper would say later, “of being able to hack around the clock and not really care what phase of the sun or moon it was. Wake up and find it twilight, have no idea whether it was dawn or sunset.” He knew, though, that it could not go on forever. And when it could not, when there was no Gosper or Greenblatt wailing away for thirty hours, how far would the hacker dream go? Would the Golden Age, now drawing to its close, really have meant anything?
• • • • • • • •
It was in 1970 that Bill Gosper began hacking LIFE. It was yet another system that was a world in itself, a world where behavior was “exceedingly rich, but not so rich as to be incomprehensible.” It would obsess Bill Gosper for years.
LIFE was a game, a computer simulation developed by John Conway, a distinguished British mathematician. It was first described by Martin Gardner, in his “Mathematical Games” column in the October 1970 issue of Scientific American. The game consists of markers on a checkerboard-like field, each marker representing a “cell.” The pattern of cells changes with each move in the game (called a “generation”), depending on a few simple rules—cells die, are born, or survive to the next generation according to how many neighboring cells are in the vicinity. The principle is that isolated cells die of loneliness, and crowded cells die from overpopulation; favorable conditions will generate new cells and keep old ones alive. Gardner’s column talked of the complexities made possible by this simple game and postulated some odd results that had not yet been achieved by Conway or his collaborators.
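Those rules fit in a few lines of code. Below is a minimal sketch in Python (the hackers, of course, worked in PDP-6 assembly); the exact thresholds, survival on two or three live neighbors and birth on exactly three, are Conway’s published rules rather than something stated above.

```python
from collections import Counter

def step(live: set) -> set:
    """One generation of Conway's LIFE, for a set of live (row, col) cells."""
    # Count how many live neighbors every nearby cell has.
    neighbor_counts = Counter(
        (row + dr, col + dc)
        for row, col in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly three neighbors; survival on two or three.
    # Isolated and overcrowded cells simply fail these tests and die.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}
```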
Gosper first saw the game when he came into the lab one day and found two hackers fooling around with it on the PDP-6. He watched for a while. His first reaction was to dismiss the exercise as not interesting. Then he watched the patterns take shape a while longer. Gosper had always appreciated how the specific bandwidth of the human eyeball could interpret patterns; he would often use weird algorithms to generate a display based on mathematical computations. What would appear to be random numbers on paper could be brought to life on a computer screen. A certain order could be discerned, an order that would change in an interesting way if you took the algorithm a few iterations further, or alternated the x and y patterns. It was soon clear to Gosper that LIFE presented these possibilities and more. He began working with a few AI workers to hack LIFE in an extremely serious way. He was to do almost nothing else for the next eighteen months.
The group’s first effort was to try to find a configuration in the LIFE universe which was possible in theory but had not been discovered. Usually, no matter what pattern you began with, after a few generations it would peter out to nothing or revert to one of a number of standard patterns named after the shape that the collection of cells formed. The patterns included the beehive, honey farm (four beehives), spaceship, powder keg, beacon, Latin cross, toad, pinwheel, and swastika. Sometimes, after a number of generations, patterns would alternate, flashing between one and the other: these were called oscillators, traffic lights, or pulsars. What Gosper and the hackers were seeking was called a glider gun. A glider was a pattern which would move across the screen, periodically reverting to the same pointed shape. If you ever created a LIFE pattern which actually spewed out gliders as it changed shape, you’d have a glider gun, and LIFE’s inventor, John Conway, offered fifty dollars to the first person who was able to create one.
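Reusing the step() function sketched above, the glider’s signature behavior is easy to check: the canonical five-cell pattern reappears after four generations, displaced one square diagonally.

```python
# Reuses step() from the sketch above. The glider repeats its pointed shape
# every four generations, displaced one square diagonally across the grid.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

pattern = glider
for _ in range(4):
    pattern = step(pattern)

# Same shape, translated one cell down and one cell to the right.
assert pattern == {(row + 1, col + 1) for row, col in glider}
```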
The hackers would spend all night sitting at the PDP-6’s high-quality “340” display (a special, high-speed monitor made by DEC), trying different patterns to see what they’d yield. They would log each “discovery” they made in this artificial universe in a large black sketchbook, which Gosper dubbed the LIFE scrapbook. They would stare at the screen as, generation by generation, the pattern would shift. Sometimes it looked like a worm snapping its tail between sudden reverses, as if it were alternating between itself and a mirror reflection. Other times, the screen would eventually darken as the cells died from aggregate overpopulation, then isolation. A pattern might end with the screen going blank. Other times things would stop with a stable “still life” pattern of one of the standards. Or things would look like they were winding down, and one little cell thrown off by a dying “colony” could reach another pattern and this newcomer could make it explode with activity. “Things could run off and do something incredibly random,” Gosper would later recall of those fantastic first few weeks, “and we couldn’t stop watching it. We’d just sit there, wondering if it was going to go on forever.”
As they played, the world around them seemed connected in patterns of a LIFE simulation. They would often type in an arbitrary pattern such as the weaving in a piece of clothing, or a pattern one of them discerned in a picture or a book. Usually what it would do was not interesting. But sometimes they would detect unusual behavior in a small part of a large LIFE pattern. In that case they would try to isolate that part, as they did when they noticed a pattern that would be called “the shuttle,” which would move a distance on the screen, then reverse itself. The shuttle left behind some cells in its path, which the hackers called “dribbles.” The dribbles were “poison” because their presence would wreak havoc on otherwise stable LIFE populations.
Gosper wondered what might happen if two shuttles bounced off each other, and figured that there were between two and three hundred possibilities. He tried out each one, and eventually came across a pattern that actually threw off gliders. It would move across the screen like a jitterbugging whip, spewing off limp boomerangs of phosphor. It was a gorgeous sight. No wonder this was called LIFE—the program created life itself. To Gosper, Conway’s simulation was a form of genetic creation, without the vile secretions and emotional complications associated with the real world’s version of making new life. Congratulations—you’ve given birth to a glider gun!
Early the next morning Gosper made a point of printing out the coordinates of the pattern that resulted in the glider gun, and rushed down to the Western Union office to send a wire to Martin Gardner with the news. The hackers got the fifty dollars.
This by no means ended the LIFE craze on the ninth floor. Each night, Gosper and his friends would monopolize the 340 display running various LIFE patterns, a continual entertainment, exploration, and journey into alternate existence. Some did not share their fascination, notably Greenblatt. By the early seventies, Greenblatt had taken more of a leadership role in the lab. He seemed to care most about the things that had to be done, and after being the de facto caretaker of the ITS system he was actively trying to transform his vision of the hacker dream into a machine that would embody it. He had taken the first steps in his “chess machine,” which responded with a quickness unheard of in most computers. He was also trying to make sure that the lab itself ran smoothly so that hacking would progress and be continually interesting.
He was not charmed by LIFE. Specifically, he was unhappy that Gosper and the others were spending “unbelievable numbers of hours at the console, staring at those soupy LIFE things” and monopolizing the single 340 terminal. Worst of all, he considered the program they were using as “clearly nonoptimal.” This was something the LIFE hackers readily admitted, but the LIFE case was the rare instance of hackers tolerating some inefficiency.
They were so thrilled at the unfolding display of LIFE that they did not want to pause even for the few days it might take to hack up a better program. Greenblatt howled in protest—“the heat level got to be moderately high,” he later admitted—and did not shut up until one of the LIFE hackers wrote a faster program, loaded with utilities that enabled you to go backward and forward for a specified number of generations, focus in on various parts of the screen, and do all sorts of other things to enhance exploration.
Greenblatt never got the idea. But to Gosper, LIFE was much more than your normal hack. He saw it as a way to “basically do science in a new universe where all the smart guys haven’t already nixed you out two or three hundred years ago. It’s your life story if you’re a mathematician: every time you discover something neat, you discover that Gauss or Newton knew it in his crib. With LIFE you’re the first guy there, and there’s always fun stuff going on. You can do everything from recursive function theory to animal husbandry. There’s a community of people who are sharing these experiences with you. And there’s the sense of connection between you and the environment. The idea of where’s the boundary of a computer. Where does the computer leave off and the environment begin?”
Obviously, Gosper was hacking LIFE with near-religious intensity. The metaphors implicit in the simulation—of populations, generations, birth, death, survival—were becoming real to him. He began to wonder what the consequences would be if a giant supercomputer were dedicated to LIFE . . . and imagined that eventually some improbable objects might be created from the pattern. The most persistent among them would survive against odds which Gosper, as a mathematician, knew were almost impossible. It would not be randomness which determined survival, but some sort of computer Darwinism. In this game that is a struggle against decay and oblivion, the survivors would be the “maximally persistent states of matter.” Gosper thought that these LIFE forms would have contrived to exist—they would actually have evolved into intelligent entities.
“Just as rocks wear down in a few billion years, but DNA hangs in there,” he’d later explain. “This intelligent behavior would be just another one of those organizational phenomena like DNA which contrived to increase the probability of survival of some entity. So one tends to suspect, if one’s not a creationist, that very very large LIFE configurations would eventually exhibit intelligent [characteristics]. Speculating what these things could know or could find out is very intriguing . . . and perhaps has implications for our own existence.”
Gosper was further stimulated by Ed Fredkin’s theory that it is impossible to tell if the universe isn’t a computer simulation, perhaps being run by some hacker in another dimension. Gosper came to speculate that in his imaginary ultimate LIFE machine, the intelligent entities which would form over billions of generations might also engage in those very same speculations. According to the way we understand our own physics, it is impossible to make a perfectly reliable computer. So when an inevitable bug occurred in that super-duper LIFE machine, the intelligent entities in the simulation would have suddenly been presented with a window to the metaphysics which determined their own existence. They would have a clue to how they were really implemented. In that case, Fredkin conjectured, the entities might accurately conclude that they were part of a giant simulation and might want to pray to their implementors by arranging themselves in recognizable patterns, asking in readable code for the implementors to give clues as to what they’re like. Gosper recalls “being offended by that notion, completely unable to wrap my head around it for days, before I accepted it.”
He accepted it.
Maybe it is not so surprising. In one sense, that far-flung conjecture was already reality. What were the hackers but gods of information, moving bits of knowledge around in cosmically complex patterns within the PDP-6? What satisfied them more than this power? If one concedes that power corrupts, then one might identify corruption in the hackers’ failure to distribute this power— and the hacker dream itself—beyond the boundaries of the lab. That power was reserved for the winners, an inner circle that might live by the Hacker Ethic but made little attempt to widen the circle beyond those like themselves, driven by curiosity, genius, and the Hands-On Imperative.
Not long after his immersion in LIFE, Gosper himself got a glimpse of the limits of the tight circle the hackers had drawn. It happened in the man-made daylight of the 1972 Apollo 17 moon shot. He was a passenger on a special cruise to the Caribbean, a “science cruise” timed for the launch, and the boat was loaded with sci-fi writers, futurists, scientists of varying stripes, cultural commentators, and, according to Gosper, “an unbelievable quantity of just completely empty-headed cruise-niks.”
Gosper was there as part of Marvin Minsky’s party. He got to engage in discussion with the likes of Norman Mailer, Katherine Anne Porter, Isaac Asimov, and Carl Sagan, who impressed Gosper with his Ping-Pong playing. For real competition, Gosper snuck in some forbidden matches with the Indonesian crewmen, who were by far the best players on the boat.
Apollo 17 was to be the first manned space shot initiated at night, and the cruise boat was sitting three miles off Cape Kennedy for an advantageous view of the launch. Gosper had heard all the arguments against going to the trouble of seeing a liftoff—why not watch it on television, since you’ll be miles away from the actual launching pad? But when he saw the damn thing actually lift off, he appreciated the distance. The night had been set ablaze, and the energy peak got to his very insides. His shirt slapped against his chest, the change in his pocket jingled, and the PA system speakers broke from their brackets on the viewing stand and dangled by their power cords. The rocket, which of course never could have held to so true a course without computers, leapt into the sky, hell-bent for the cosmos like some flaming avenger, a Spacewar nightmare; the cruise-niks were stunned into trances by the power and glory of the sight. The Indonesian crewmen went berserk. Gosper later recalled them running around in a panic and throwing their Ping-Pong equipment overboard, “like some kind of sacrifice.”
The sight affected Gosper profoundly. Before that night, Gosper had disdained NASA’s human-wave approach toward things. He had been adamant in defending the AI lab’s more individualistic form of hacker elegance in programming, and in computing style in general. But now he saw how the real world, when it got its mind made up, could have an astounding effect. NASA had not applied the Hacker Ethic, yet it had done something the lab, for all its pioneering, never could have done. Gosper realized that the ninth-floor hackers were in some sense deluding themselves, working on machines of relatively little power compared to the computers of the future—yet still trying to do it all, change the world right there in the lab. And since the state of computing had not yet developed machines with the power to change the world at large—certainly nothing to make your chest rumble as did the NASA operation—all that the hackers wound up doing was making Tools to Make Tools. It was embarrassing.
Gosper’s revelation led him to believe that the hackers could change things—just make the computers bigger, more powerful, without skimping on expense. But the problem went even deeper than that. While the mastery of the hackers had indeed made computer programming a spiritual pursuit, a magical art, and while the culture of the lab was developed to the point of a technological Walden Pond, something was essentially lacking.
The world.
As much as the hackers tried to make their own world on the ninth floor, it could not be done. The movement of key people was inevitable. And the harsh realities of funding hit Tech Square in the seventies: ARPA, adhering to the strict new Mansfield Amendment passed by Congress, had to ask for specific justification for many computer projects. The unlimited funds for basic research were drying up; ARPA was pushing some pet projects like speech recognition (which would have directly increased the government’s ability to mass-monitor phone conversations abroad and at home). Minsky thought the policy was a “losing” one, and distanced the AI lab from it. But there was no longer enough money to hire anyone who showed exceptional talent for hacking. And slowly, as MIT itself became more ensconced in training students for conventional computer careers, the Institute’s focus shifted somewhat. The AI lab began to look for teachers as well as researchers, and the hackers were seldom interested in the bureaucratic hassles, social demands, and lack of hands-on machine time that came with teaching courses.
Greenblatt was still hacking away, as was Knight, and a few newer hackers were proving themselves masters at systems work . . . but others were leaving, or gone. Now, Bill Gosper headed West. He arranged to stay on the AI lab payroll, hacking on the ninth-floor PDP-6 via the ARPAnet, but he moved to California to study the art of computer programming with Professor Donald Knuth at Stanford. He became a fixture at Louie’s, the best Chinese restaurant in Palo Alto, but was missing in action at Tech Square. He was a mercurial presence on computer terminals there but no longer a physical center of attention, draped over a chair, whispering, “Look at that,” while the 340 terminal pulsed insanely with new forms of LIFE. He was in California, and he had bought a car.
With all these changes, some of the hackers sensed that an era was ending. “Before [in the sixties], the attitude was, ‘Here’s these new machines, let’s see what they can do,’” hacker Mike Beeler later recalled. “So we did robot arms, we parsed language, we did Spacewar . . . now we had to justify according to national goals. And [people pointed out that] some things we did were curious, but not relevant . . . we realized we’d had a Utopian situation; all this fascinating culture. There was a certain amount of isolation and lack of dissemination, spreading the word. I worried that it was all going to be lost.”
It would not be lost. Because there was a second wave of hackers, a type of hacker who not only lived by the Hacker Ethic but saw a need to spread that gospel as widely as possible. The natural way to do this was through the power of the computer, and the time to do it was now. The computers to do it would have to be small and cheap—making the DEC minicomputers look like IBM Hulking Giants by comparison. But small and powerful computers in great numbers could truly change the world. There were people who had these visions, and they were not the likes of Gosper or Greenblatt: they were a different type of hacker, a second generation, more interested in the proliferation of computers than in hacking mystical AI applications. This second generation consisted of hardware hackers, and the magic they would make in California would build on the cultural foundation set by the MIT hackers to spread the hacker dream throughout the land.