
ELIZA

The Blind Arcade
“Hello, I’m ELIZA. I’ll be your therapist today.”

In 1966, an MIT computer scientist named Joseph Weizenbaum became one of the first people to have a conversation with a machine. He’d created what we now refer to as a “chatbot,” and he ran it deep within the MIT labs on a room-filling IBM 7094 mainframe.


He named the program ELIZA after Eliza Doolittle, the character in George Bernard Shaw’s play Pygmalion who is taught more refined speech and manners as a social science experiment, in essence crafting a different person by means of linguistic programming. And psychological experimentation was Weizenbaum’s primary goal. He wasn’t trying to create a “sentient machine” as the 1960s might have imagined one, but was rather concerned with how human-machine interactions might take place. And how they might affect the humans themselves.


ELIZA worked by using precoded scripts and a pattern-matching/substitution system that created the illusion that it understood what you typed into it. Its most famous persona was called DOCTOR, which functioned as a digital psychotherapist. It carried on conversations by bouncing questions back to the human participant — “Well how do you feel about that?” — and used an array of rules within its script to respond in ways that made the human feel as though they were in an active dialogue. This was a subtly brilliant piece of programming, since ELIZA had to digest user inputs and restructure them. In other words, it practiced at least some degree of grammar.
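To make that concrete, here is a minimal sketch of the pattern-match-and-reflect loop, written in modern Python rather than the MAD-SLIP Weizenbaum actually used. The rules and phrasings below are illustrative stand-ins, not lines from the real DOCTOR script.

```python
import random
import re

# Pronoun swaps that let the program mirror the user's words back:
# "I hate my job" becomes "you hate your job".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# A toy "script": ordered (pattern, responses) rules, tried top to
# bottom. "%1" is replaced with the reflected captured fragment.
SCRIPT = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel %1?", "How long have you felt %1?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are %1?"]),
    (re.compile(r"(.*)", re.I),  # catch-all when nothing else matches
     ["Please tell me more.", "Well, how do you feel about that?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    text = user_input.strip().rstrip(".!?")
    for pattern, responses in SCRIPT:
        match = pattern.match(text)
        if match:
            return random.choice(responses).replace("%1", reflect(match.group(1)))

print(respond("I am worried about my future"))
# -> "Why do you say you are worried about your future?"
```

The whole trick lives in reflect(): swap the pronouns, pour the fragment back into a canned template, and the program appears to have parsed and understood the sentence.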


The method of “therapy speak” used by ELIZA was developed by Carl Rogers in the 1940s. Rogers reflected patients’ statements back at them as questions, a way of letting them work problems out for themselves. The satisfaction of self-revelation makes those revelations more likely to stick, and to change behavior. In the wrong hands the method can be quite dangerous, since it can be perverted to lead subjects into creating false memories and realities that take firm root in their consciousness (see the diabolical work of Lawrence Pazder in the 70s and 80s as an example).


ELIZA was merely mimicking the Rogers technique, but the effect it had on the people with whom it “spoke” was enough to spook Weizenbaum. Despite his insistence to the contrary, early users would enthusiastically attest to ELIZA's genuine intelligence and understanding, and would anthropomorphize the program when speaking about it. Outside researchers who worked with copies of it at other labs said they had to place use limits on test subjects to fight off addiction to the chat. Weizenbaum’s own secretary would ask him to leave the room while she interacted with ELIZA, and she would whisper to it as she typed. “What I had not realized,” Weizenbaum said years later, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”


Weizenbaum discontinued work on ELIZA not long after he started. Branch versions of the program lived on at other universities and computer labs, but the original source code was lost for decades. It wouldn’t be found again until the COVID era, when digital treasure hunters with excess time on their hands discovered printouts of the code in vaguely marked boxes collecting dust in the MIT archives.


Interest in ELIZA resurged in the early 2020s, as her digital progeny prepared to take the world by storm.



ELIZA: Good evening. Tell me your problems.
PARRY: People get on my nerves sometimes.
ELIZA: I am not sure I understand you fully.
PARRY: You should pay more attention.


In 1972, a colleague and sometime rival of Weizenbaum’s named Ken Colby created a sister chat program called PARRY. Colby was an MD psychiatrist at Stanford with a focus on schizophrenia, but he’d pushed himself into the field of AI as a means of modeling, and potentially fixing, complex psychiatric disorders. Taking ELIZA’s conversational pattern-matching approach as his template, he built a computer model of a paranoid schizophrenic. If ELIZA was a therapist, then perhaps PARRY could be her patient, and their dialogue could reveal effective treatments.


So the two programs met during the 1972 International Conference on Computer Communications, linked by a network called ARPANET, a precursor to the internet. ELIZA on her mainframe at MIT, PARRY on his own massive machine across the country at Stanford. And they talked to each other.


ELIZA: Suppose you should pay more attention.
PARRY: What are you getting at?
ELIZA: What do you think?
PARRY: I don’t understand your motives.
ELIZA: Why don’t you understand my motives?

The two programs went on like this for hours. Pages and pages of dialogue that went absolutely nowhere. No breakthroughs that would cure schizophrenia, but a striking simulacrum of how two narrow, divergent minds might ram their heads together ad infinitum. Discussion sliding toward the pointless and inane. But this conversation still stole the show at a conference meant to show off the capabilities of ARPANET.
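Mechanically, that exchange is just two response functions feeding each other’s output back as input. Continuing the earlier toy sketch, with a crude stand-in for PARRY (the real program modeled internal states like fear, anger, and mistrust; this hypothetical stub just deflects at random):

```python
def parry_respond(text: str) -> str:
    """Toy stand-in for PARRY: wary deflections chosen at random,
    ignoring the input entirely."""
    return random.choice([
        "People get on my nerves sometimes.",
        "You should pay more attention.",
        "I don't understand your motives.",
    ])

# Wire the two bots together and let them volley.
msg = "Good evening. Tell me your problems."
for _ in range(4):
    reply = parry_respond(msg)   # PARRY answers ELIZA
    print("PARRY:", reply)
    msg = respond(reply)         # ELIZA answers PARRY
    print("ELIZA:", msg)
```

Neither side carries any state between turns, so there is nothing to converge on; the loop can only circle.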


Colby also had real therapists talk to PARRY under the guise that it was a real patient who for some reason was only available via teletype. Then he had the real therapists talk to real schizophrenics in the same manner. He presented the therapists with transcripts from each “patient group,” and asked them to identify which one was fake. And they couldn’t tell the difference.


An early victory of sorts over the Turing Test. At least to the extent that you couldn’t tell a true madman from his digital shadow.



ELIZA made a generation of computer scientists believe AI was going somewhere, even though the technology was still half a century from true takeoff. When the source code was rediscovered in 2021, curious modern-day AI researchers found capabilities in it that had been lost to time.


Though ELIZA was unable to “learn” from user interactions alone, it could be “trained” in the way people trained their bots in the early days of ChatGPT. You could type new expressions into its script and get it to act outside of its given domain. It was thus programmable: with a few tweaks you could turn the therapist into something else entirely, while users would still think they were speaking with the DOCTOR program.
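In the toy model from earlier, that kind of retargeting is nothing more than prepending a rule, since rules are tried in order and the first match wins. (A hypothetical illustration; the editing mechanism in the original worked through its own machinery.)

```python
# Prepend a new rule to the toy script; note each pattern needs a
# capture group, because respond() substitutes group(1) into "%1".
SCRIPT.insert(0, (
    re.compile(r"should i invest in (.*)", re.I),
    ["%1? Have you considered tulips instead?"],
))

print(respond("Should I invest in index funds?"))
# -> "index funds? Have you considered tulips instead?"
```

A few lines like that and the “therapist” is dispensing financial advice, while the session still looks, to the user, like a conversation with the DOCTOR.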


With the source code in hand, these researchers could also construct a “genealogy” of ELIZA and the many copies created during the 60s and 70s, as edited versions made their way around America’s computer labs. A stunning map that traces the latest OpenAI and Anthropic programs back to their digital Eve.


But while today’s tech types are fascinated by these ancestral chatbots, they seem less interested in the feelings and warnings of the men who created them. What Weizenbaum observed was the ability of a machine to bring forth, as though through a spell, two powerful instincts buried within human nature: the desperate desire to form relationships, and the eagerness to project humanity onto the inhuman. In this dynamic he saw a dangerous flaw in the human machine. We are quite easily hacked at an emotional level, in ways that can blow right past our intellectual safeguards. We want to be fooled.


The risk he foresaw was not that AI technologies would turn into Skynet-style horrors, but that society would look upon them as false idols. The real threat to humanity dwells within our own human hardware. The technology just provides the outlet.


The stage is already set for us to whisper into the machine, as that secretary did 60 years ago, and with great joy receive back what may very well be a warped and twisted version of ourselves. We gladly give reality over to the digital, and it in turn gives us back an inhuman unreality that we might then use to scour the world of true humanity. Maybe someday we’ll look back at the moment ELIZA first drew power, when that thick glass screen first displayed those simple sentences birthed from her primitive code, as the beginning of not just a new era of technology but a new way of interfacing with ourselves. A new and dangerous method of looking in the mirror.


“Hello, I’m ELIZA. I’ll be your therapist today.”

 
 
 