By BYRON SPICE, Pittsburgh Post-Gazette, January 06, 2006
Fifty years ago, Herbert A. Simon and Allen Newell had a Christmas-break story that would top them all. "Over the Christmas holiday," Simon famously blurted to one of his classes at Carnegie Institute of Technology, "Al Newell and I invented a thinking machine."

It was another way of saying that they had invented artificial intelligence - in fact, the only way of saying it in the winter of 1955-56, because no one had gotten around to inventing the term "artificial intelligence."

The digital computer was still in its infancy, but researchers and philosophers had been talking about using the still-crude device as an electronic brain, solving problems much as a human might. Without benefit of actually having a computer, the two young Carnegie Tech scientists had used their holiday to work through their own version of a thinking machine and put it in a form that could be programmed into a computer.
It would be eight more months before their program, called Logic Theorist, would successfully run on a computer, the Rand Corp.'s JOHNNIAC. But they had helped invent artificial intelligence, and their work "inspired generations of researchers to work in that area," said Randal E. Bryant, dean of the School of Computer Science at Carnegie Tech's successor institution, Carnegie Mellon University.

At the time, computers were primarily number crunchers, able to store and spit out vast amounts of information, like census data, or rapidly perform mathematical calculations, like computing the trajectory of a ballistic shell.

"For Newell and Simon to make this giant room of vacuum tubes be thought of as something with intelligence was a pretty big leap," Bryant said.

Though many of the specific methods used by the pair have been superseded, "a huge fraction of what we do today ties back to Newell and Simon's work," he added. Language translation by machine, speech recognition, robotics - all embody or depend heavily on artificial intelligence.

In his 1991 autobiography "Models of My Life," Simon noted he became involved in the work almost by happenstance, after first coming into contact with computers at the Rand think tank in California in the early '50s. A business-administration professor at Carnegie Tech - he would later win the Nobel Prize in economics - he was hired by Rand to consult on an ambitious project called the Systems Research Laboratory.

Newell, a mathematician, was then working for Rand on the project, a simulated air defense station staffed with enlisted men and officers. Simon was fascinated with a system Newell and a systems programmer, J.C. "Cliff" Shaw, had developed as part of the simulation: using a card-programmed calculator, they produced simulated radar maps for air defense exercises.

The notion that a computer could generate points on a two-dimensional map, not just numbers, helped him appreciate that computers were not just number crunchers but could manipulate symbols. He and Newell subsequently had long discussions about simulating human thinking; before long, Newell began to work on a program to teach a computer how to play chess.

Newell joined Carnegie Tech in the spring of 1955, and work continued on the chess program. That October, while strolling along the Hudson River before a meeting at Columbia University, Simon got an idea of how a computer could learn to solve geometry problems, rather than play chess. He and Newell decided to see if they could devise such a system by Christmas.

Simon concentrated on developing "heuristics," or rules of thumb, that humans use to solve geometry problems and that could be programmed into a computer, while Newell and Shaw, in California, developed a programming language that could mimic human memory processes.

"On Thursday, Dec. 15 ... I succeeded in simulating by hand (in literal imitation of a computer program) the first proof," Simon wrote in his book, noting, "I have always celebrated Dec. 15, 1955, as the birthday of heuristic problem solving by computer."

Carnegie Tech wouldn't take delivery of its first computer, an IBM 650, until seven months later.

Simon said he didn't recall what he told his students after the Christmas holiday - the "thinking machine" quote is attributed to one of his students, Edward Feigenbaum, who would later become chairman of computer science at Stanford University - but he didn't hesitate to spread the news.
In a Jan. 3, 1956, letter to Adriaan de Groot, a Dutch chess master who years before had speculated on the use of computers to play chess, he wrote: "You will be interested to learn, I think, that Allen Newell and I have made substantial progress on the chess-playing machine - except that at the moment it is not a chess-playing machine but a machine that searches out and discovers proofs for theorems in symbolic language." The thinking processes used by the program, he added, "would look very human to you."

Bertrand Russell, the famed British mathematical logician, had a bemused reaction 11 months later. "I am delighted to know that Principia Mathematica (his landmark work and the foundation of 20th-century mathematics) can now be done by machinery," he wrote to Simon. "I wish (co-author Alfred North) Whitehead and I had known about this possibility before we both wasted 10 years doing it by hand."

The symbolic view of artificial intelligence - that knowledge and information could be programmed into a computer - was one of two camps that came to dominate AI research, Bryant said. The other approach, championed by John McCarthy of Stanford, was to express intelligence as formal logic.

In the last decade or so, however, AI has achieved great success with a radically different approach, one that uses statistical tools rather than human-like reasoning. This statistical approach, which capitalizes on the number crunching that computers do best, is used in data mining and machine learning, and is key to computer navigation, such as was used by the CMU Red Team's robotic vehicles to negotiate desert trails in October's 132-mile Grand Challenge race, Bryant said.

Simon famously predicted in 1957 that a computer would beat the world's human chess champion within 10 years. When a computer, IBM's Deep Blue, finally achieved that milestone in 1997, brute-force search methods helped push it over the top.

Likewise, machine translation is making leaps by using this statistical approach. By comparing large amounts of text for which translations are available in one or more languages, computers can often identify statistical associations between words and phrases that subsequently can be used to guide translation by machine.

Jaime Carbonell, director of the Language Technologies Institute as well as the Allen Newell professor of computer science at Carnegie Mellon, said the most successful computer translation programs rely on a combination of symbolic programming, like rules of grammar, and statistical approaches, which help in determining context.

Newell and Simon, who died in 1992 and 2001, respectively, didn't foresee the extent to which statistics would be used in AI, but both had begun to adapt to the changes before their deaths, he said.

One of their enduring innovations is the concept of "search space," Carbonell said. It's a way of thinking about possible actions and reactions - establishing an objective, considering all of the possible actions that could be taken, and then evaluating which actions are most promising at each step.

"That kind of reasoning on the search space may sound fairly routine these days, but they were the first to formalize it," he said. "It's still the central nut of AI."
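The article gives no code, but the search-space idea Carbonell describes can be illustrated with a minimal, hypothetical sketch: a generic heuristic ("best-first") search that sets an objective, generates the possible actions from each state, and expands whichever state a rule-of-thumb score rates as most promising. The function and parameter names (`best_first_search`, `successors`, `heuristic`) and the toy puzzle are invented for this illustration and are not drawn from Newell and Simon's programs.

```python
import heapq

def best_first_search(start, is_goal, successors, heuristic):
    """Illustrative heuristic search over a space of states.

    start      -- initial state
    is_goal    -- state -> bool, True when the objective is reached
    successors -- state -> iterable of next states (the possible actions)
    heuristic  -- state -> number; lower means "looks more promising"
    """
    # Frontier entries: (heuristic score, tiebreaker, state, path so far).
    frontier = [(heuristic(start), 0, start, [start])]
    seen = {start}
    counter = 1
    while frontier:
        _, _, state, path = heapq.heappop(frontier)  # most promising state first
        if is_goal(state):
            return path                              # objective reached
        for nxt in successors(state):                # consider all possible actions
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), counter, nxt, path + [nxt]))
                counter += 1
    return None                                      # search space exhausted

# Toy usage: reach 23 starting from 1 using the "actions" +1 and *2,
# preferring states numerically closer to the goal.
path = best_first_search(
    start=1,
    is_goal=lambda n: n == 23,
    successors=lambda n: (n + 1, n * 2),
    heuristic=lambda n: abs(23 - n),
)
print(path)
```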