Due: January 19, 2026 at 11:59 PM EST

Assignment 1: Building the ELIZA Chatbot

Overview

In this assignment, you will build a simplified version of ELIZA, one of the earliest programs to mimic human conversation. Created by Joseph Weizenbaum at MIT in 1964-1966, ELIZA was designed to simulate a session with a non-directive (Rogerian) psychotherapist using simple pattern matching and string manipulation.

But this assignment goes beyond just implementing a chatbot. You'll also analyze what makes ELIZA feel conversational despite its simplicity, compare it to modern chatbots, and reflect on the psychological phenomenon known as the "ELIZA effect"—the tendency for people to attribute human-like understanding to computer programs.

This is your first assignment in the course, and it's designed to introduce you to fundamental concepts in natural language processing: pattern matching, text manipulation, rule-based systems, and the critical distinction between appearing intelligent and actually understanding language.

Vibe coding is encouraged!

For this assignment, it is possible to code everything from scratch in the allotted assignment time (1 week). However, I strongly recommend that you use a coding agent to help you implement the core components. Some suggestions are provided below. If you're already familiar with vibe coding, then feel free to use whatever approach you're comfortable with. If you're new to vibe coding, or if you'd like to try something different, what works well for me is:

  1. Start with a highly detailed description of exactly what you want your coding agent to make. I would suggest including the assignment as a reference file to provide additional instructions and context. But to increase the chances of getting working code and to reduce hallucinations and other quality issues, you should explain as clearly as possible precisely what the core algorithms are, how the work should be approached, and so on. In order to create this document, it is critical that you have a detailed and comprehensive understanding of what the solution should look like. You don't necessarily need to know how to code it yourself (LLMs are fantastic at writing code), but you do need to know exactly what each function will do (i.e., what are the inputs, what are the outputs, and how can you convince yourself that it's working, including tricky edge cases). It helps to include examples to illustrate how each component should work.
  2. Next, pass your description to your coding agent, and ask it to come up with a detailed technical design document. You should review this in detail, edit carefully, and iterate (with help from the coding agent, other LLMs, web searches, and your own intuitions) until you are 100% happy with the design. Ideally the design should contain skeleton code and/or code snippets showing exactly how each component of your project will be implemented.
  3. Once you have your technical design document, the next step is to construct a detailed implementation plan. Given your target (i.e., your technical design document) ask your coding agent to draft a plan for how to implement it. You can use both the assignment instructions and your technical design document as context. Since implementation plans can get lengthy, you may be pushing up against the context limits of your coding agent of choice. A nice "trick" is to break your task into smaller sub-tasks, and then use agents to do each sub-task. Then no single instance of the coding agent needs to store the full code base and plan in its context. As with the technical design document, you should iterate until you are 100% happy with the plan. Importantly, you should include in your plan a way of verifying that everything is working correctly.
  4. Then let your model "loose" on the problem and have it draft a solution. Provide the assignment, technical design document, and implementation plan as context. It's highly likely that the first solution your coding agent comes up with will be wrong in important ways-- it might not run, it might not do what you asked, you might have had a conceptual bug in your understanding that propagated to the model's solution, and so on. That's where you should turn to the verification checks from your implementation plan. In addition to making sure those checks "pass," you should also (when you're creating chatbots or other interactive applications, like in this assignment) use it yourself. Pretend you're using your chatbot (e.g., pretend you're me, and you're trying to stress test the implementation). Does the code behave like you expect? Does the code break down in unexpected ways? Does it break down in expected ways? Are there any pieces that you don't understand fully?

Learning Objectives

By completing this assignment, you will:

  1. Implement a rule-based chatbot using pattern matching and text transformations
  2. Understand the mechanics of ELIZA including pre/post-substitutions, synonym handling, and decomposition/reassembly
  3. Analyze conversational patterns by testing ELIZA on diverse inputs
  4. Compare simple and complex approaches to conversational AI
  5. Explore psychological aspects of human-computer interaction (ELIZA effect, anthropomorphization)
  6. Reflect critically on what conversation actually requires vs. what creates the illusion of understanding
  7. Appreciate historical context of early AI and its relevance to modern systems

Background

The ELIZA Effect

When Weizenbaum first demonstrated ELIZA, he was shocked by people's reactions. His secretary, who watched him build the program, asked him to leave the room so she could talk to ELIZA privately. Users shared intimate details of their lives. Some refused to believe it was "just" a program following simple rules.

This phenomenon—the tendency to attribute human-like understanding to computer systems based on their outputs—became known as the ELIZA effect. It's particularly relevant today as we interact with increasingly sophisticated chatbots and AI systems.

How ELIZA Works

Unlike modern chatbots that use machine learning, ELIZA operates purely through:

  1. Pattern Matching: Identifying keywords and patterns in user input
  2. Text Transformations: Applying substitutions to normalize and manipulate text
  3. Template Responses: Using predefined templates to generate replies
  4. Context-Free Operation: No memory, no learning, no real understanding

Despite this simplicity, ELIZA can create surprisingly convincing conversational experiences, especially when mimicking a Rogerian psychotherapist who reflects questions back to the patient.

Why This Matters

Understanding ELIZA is crucial for understanding modern AI:

Part 1: Implementation

You will follow these steps to complete the assignment. Each part corresponds to specific functionality that the chatbot must have.

1. Read the Rules from a File

The chatbot will use a predefined file (instructions.txt) containing patterns, synonyms, and rules for text manipulations. You will need to write code to read and parse this file into an appropriate data structure that your program can use for pattern matching, substitutions, and response generation.
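For illustration only, a parser for a hypothetical line-based format might look like the sketch below. The tags (greeting:, quit:, pre:, post:, synonym:, pattern:) and the dictionary layout are assumptions made for this sketch; adapt everything to the actual format of instructions.txt.

```python
# Minimal parsing sketch. The real instructions.txt format may differ;
# the tags used here are hypothetical placeholders.
def load_rules(path="instructions.txt"):
    rules = {"greetings": [], "quit": [], "pre": {}, "post": {},
             "synonyms": {}, "patterns": []}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            tag, _, rest = line.partition(":")
            rest = rest.strip()
            if tag == "greeting":
                rules["greetings"].append(rest)
            elif tag == "quit":
                rules["quit"].extend(rest.split())
            elif tag in ("pre", "post"):
                old, new = rest.split(None, 1)
                rules[tag][old] = new
            elif tag == "synonym":
                words = rest.split()
                rules["synonyms"][words[0]] = set(words)
            elif tag == "pattern":
                decomposition, reassembly = rest.split("->", 1)
                rules["patterns"].append((decomposition.strip(), reassembly.strip()))
    return rules
```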

2. Start with a Greeting

Once the chatbot starts, it should display one of the pre-defined greeting lines from the instructions.txt file. For example:
Welcome. What brings you here today?

3. Implement the Conversation Loop

Your chatbot should repeatedly ask for user input and respond based on pattern matching until the user types a quit command. The conversation ends when the user enters one of the quit keywords specified in the instructions file (e.g., bye, quit).
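A minimal sketch of the loop is shown below. Here respond() is a stub standing in for your full pattern-matching pipeline, and the rules dictionary and its key names are placeholders for whatever data structure you built in step 1.

```python
import random

# Conversation-loop sketch: greet, read input, check for quit keywords,
# otherwise respond and repeat. respond() is a placeholder stub.
def respond(user_input, rules):
    return "I'm not sure I understand you fully. Can you elaborate?"  # default response

def chat(rules):
    print("Chatbot:", random.choice(rules["greetings"]))
    while True:
        user_input = input("User: ").strip()
        if user_input.lower() in rules["quit"]:
            print("Chatbot: That will be $200. See you again next week.")
            break
        print("Chatbot:", respond(user_input, rules))
```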

4. Pre-Substitutions

Before matching patterns, the chatbot should perform pre-substitutions. These substitutions map certain words to alternatives that are easier for the chatbot to handle (for example, standardizing variant spellings or contractions). You should implement a function that performs these substitutions on the user's input before further processing.
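A minimal sketch of such a function appears below; the example mapping is purely illustrative, since the actual pre-substitution pairs come from instructions.txt.

```python
# Pre-substitution sketch: replace each word with its mapped form, if any.
def apply_substitutions(text, substitutions):
    words = text.lower().split()
    return " ".join(substitutions.get(w, w) for w in words)

# Hypothetical example mapping (the real one is read from the rules file):
pre_subs = {"dont": "don't", "cant": "can't"}
print(apply_substitutions("I dont know", pre_subs))  # -> "i don't know"
```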

5. Synonym Handling

Your chatbot should also handle synonym substitution. The rules file provides sets of synonyms that the chatbot should treat as equivalent for pattern matching purposes. For example, the words "sad", "unhappy", and "depressed" are considered synonymous.

You will need to implement synonym substitution, where the input is normalized before attempting to match any patterns. For example, if a user says, "I am unhappy," the chatbot should recognize that "unhappy" is synonymous with "sad."
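One simple approach, sketched below, maps every synonym to a canonical keyword before matching. The sad/unhappy/depressed set comes from the example above; the real synonym sets come from instructions.txt.

```python
# Synonym-normalization sketch: rewrite each synonym as its canonical keyword
# so a single decomposition rule can match all of them.
synonyms = {"sad": {"sad", "unhappy", "depressed"}}

def normalize_synonyms(text, synonyms):
    canonical = {word: key for key, words in synonyms.items() for word in words}
    return " ".join(canonical.get(w, w) for w in text.lower().split())

print(normalize_synonyms("I am unhappy", synonyms))  # -> "i am sad"
```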

6. Pattern Matching

Your chatbot needs to match user input against a set of predefined patterns (as specified in the instructions.txt file). Each pattern pairs a decomposition rule, which matches the structure of the user's input, with one or more reassembly rules, which construct the response. For example, if the user says, "I feel sad," a decomposition rule might match I feel *, and a reassembly rule might respond with, "Why do you feel sad?" You should implement both the decomposition step (matching, including the * wildcard) and the reassembly step (inserting the captured text into the response template).
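One common way to implement the * wildcard is to translate each decomposition rule into a regular expression and then fill the captured groups into the reassembly template. The sketch below assumes a (1)-style placeholder in reassembly rules, which may differ from the format actually used in instructions.txt.

```python
import re

# Decomposition/reassembly sketch: convert a "*"-style rule into a regex,
# then substitute captured groups into the reassembly template.
def match_pattern(user_input, decomposition, reassembly):
    regex = "^" + re.escape(decomposition).replace(r"\*", "(.*)") + "$"
    m = re.match(regex, user_input, flags=re.IGNORECASE)
    if m is None:
        return None
    response = reassembly
    for i, group in enumerate(m.groups(), start=1):
        response = response.replace(f"({i})", group.strip())
    return response

print(match_pattern("I feel sad", "I feel *", "Why do you feel (1)?"))
# -> "Why do you feel sad?"
```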

7. Post-Substitutions

Once the chatbot has generated a response, it should perform post-substitutions to finalize the output. This step swaps certain words so that the user's own words, when reflected back, remain grammatical (for example, "my" becomes "your" and "I" becomes "you"). For instance, if the user says, "I can't stop thinking about my family," and the matched pattern reflects part of that input back, the raw reassembly would contain "my family"; post-substitution turns this into "your family," so the chatbot responds, "Tell me more about your family."
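As a small self-contained sketch (the word mapping here is illustrative; the real mapping comes from instructions.txt), pronoun reflection on a captured fragment might look like this:

```python
# Post-substitution sketch: reflect pronouns in a captured fragment before
# inserting it into the response template.
post_subs = {"my": "your", "i": "you", "am": "are", "me": "you"}

def reflect(fragment, substitutions):
    return " ".join(substitutions.get(w, w) for w in fragment.lower().split())

print("Tell me more about " + reflect("my family", post_subs) + ".")
# -> "Tell me more about your family."
```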

8. Generate Responses and Continue the Loop

Once the chatbot generates a response, it should display the response and continue the conversation until the user types a quit command. The chatbot should have default responses if no patterns match the user input, such as:
I'm not sure I understand you fully. Can you elaborate?

9. Quit and Final Message

When the user inputs one of the quit keywords (e.g., bye or quit), the chatbot should print a final message, like:
That will be $200. See you again next week.

Example Conversation

Your chatbot should handle a conversation like the following:

Chatbot: Welcome. What brings you here today?
User: I am feeling sad.
Chatbot: Why do you feel sad?
User: I can't stop thinking about my family.
Chatbot: Tell me more about your family.
User: I have to go now.
Chatbot: That will be $200. See you again next week.

Implementation Requirements

Part 2: Analysis and Exploration

Once your ELIZA implementation is working, conduct the following analyses to understand what makes it tick and where it breaks down.

1. Conversation Testing (Required)

Test your ELIZA on at least 5 different conversation scenarios:

a) Typical therapy session: Discuss feelings, family, dreams (what ELIZA was designed for)

b) Technical/factual questions: Ask about facts, math problems, or technical information

c) Casual conversation: Try small talk about weather, hobbies, or daily life

d) Adversarial testing: Try to "break" ELIZA with unusual inputs, edge cases, or nonsense

e) Emotional depth: Explore a single topic in depth across multiple turns

For each conversation:

2. The ELIZA Effect Analysis (Required)

Reflect on the psychological aspects of interacting with ELIZA (500-1000 words in total):

a) When does ELIZA feel human?

b) When does the illusion break?

c) Modern parallels

3. Pattern Analysis (Required)

Analyze ELIZA's rule system (500-1000 words in total):

a) Pattern effectiveness

b) Coverage gaps

c) Substitution impact

4. Comparison with Modern Chatbots (Required)

Compare your ELIZA implementation with a modern chatbot (500-1000 words in total, excluding conversation excerpts):

a) Choose a comparison system

b) Side-by-side testing

c) Analysis

Part 3: Reflection and Insights (Required)

Write a thoughtful reflection (500-1000 words in total) addressing:

  1. What is conversation?
    • What does ELIZA reveal about the nature of conversation?
    • Can pattern matching alone constitute "conversation"?
    • What's missing from ELIZA that humans have?
  2. Understanding vs. simulation
    • Does ELIZA "understand" anything? Why or why not?
    • What would it take for a system to truly understand language?
    • How do we know if modern LLMs "understand" vs. just simulate?
  3. The gap to modern AI
    • What are the key limitations of rule-based approaches?
    • What fundamental advances enabled modern chatbots?
    • What problems remain unsolved even with modern systems?
  4. Ethical implications
    • Should users be informed they're talking to a bot?
    • What are the risks of systems that simulate understanding?
    • How do Weizenbaum's concerns apply to today's AI?

Part 4: Extensions (Optional Bonus)

Choose one or more extensions to enhance your ELIZA:

Extension 1: Advanced Pattern Matching

Implement improved pattern matching using: Compare the enhanced version with your original implementation.

Extension 2: Emotional State Tracking

Add a simple emotion tracking system:

Extension 3: Conversation Analytics

Build analytics for ELIZA conversations: Create visualizations of these metrics across multiple conversations.

Extension 4: Simple Transformer Comparison

Implement a minimal transformer-based chatbot:

Extension 5: Hybrid System

Create a hybrid system that combines: Document design decisions and performance improvements.

Deliverables

Submit a single Jupyter notebook that includes:

1. Implementation (40%)

2. Conversation Examples (20%)

3. Analysis (25%)

4. Reflection (10%)

5. Presentation (5%)

Optional Bonus

Evaluation Criteria

Technical Implementation (40 points)

Conversation Examples and Testing (20 points)

Analysis (25 points)

Reflection (10 points)

Presentation (5 points)

Total: 100 points (plus potential bonus)

Grading Scale

Tips for Success

Getting Started

  1. Read Weizenbaum's paper first: Understanding the original design will help immensely
  2. Start with simple patterns: Get basic matching working before tackling complex cases
  3. Test incrementally: Don't write everything at once—test each component separately
  4. Use print statements: Debug by printing what patterns match and why
  5. Study instructions.txt: Understand the format before parsing it
  6. Learn from a reference implementation: Use the reference implementation from our course, including the rule breakdown and rule editor tabs, to see how each component should behave.

Common Pitfalls to Avoid

  1. Incorrect pattern priority: Patterns with higher rank numbers should be checked first
  2. Substitution order: Pre-substitutions before pattern matching, post-substitutions after
  3. Wildcard matching: The * should match any sequence of words
  4. Synonym expansion: Remember to treat all synonyms as equivalent during matching
  5. Reassembly: Captured groups should be properly reinserted in responses
  6. Case sensitivity: Normalize case for matching but preserve it appropriately in output

Debugging Strategies

  1. Start tiny: Test with just 2-3 patterns before using the full instructions.txt
  2. Print everything: Show which pattern matched and which reassembly was selected
  3. Manual trace: Walk through a simple example by hand to verify logic
  4. Edge cases: Test empty input, very long input, special characters
  5. Compare outputs: Check your responses against online ELIZA implementations

Making Your Analysis Stand Out

  1. Be specific: Use concrete examples from your conversations
  2. Think critically: Don't just describe—analyze why things work or fail
  3. Make connections: Link ELIZA to modern AI systems and concepts
  4. Be honest: It's okay to note limitations and struggles
  5. Show creativity: Test ELIZA in interesting or unexpected ways

Time Management

This assignment is designed to be completed in one week (7 days). While substantial, it is achievable within a week with focused effort. Here's a suggested daily breakdown:

Key tip: These phases can overlap! While implementing, you can begin testing. While analyzing, you can refine your implementation. The daily breakdown above represents the primary focus for each phase, but iterating and moving between phases is normal and expected. Start early in the week and test incrementally as you build each component.

Resources and References

Essential Reading

  1. Weizenbaum, J. (1966). "ELIZA—A Computer Program For the Study of Natural Language Communication Between Man and Machine"
    • PDF Link
    • Read this first! It will help you understand the design and implementation

Additional Context

  1. Weizenbaum, J. (1976). "Computer Power and Human Reason: From Judgment to Calculation"
    • PDF Link
    • Weizenbaum's later reflections on ELIZA and AI ethics
    • Highly relevant to your reflection section
  2. Hofstadter, D. (1995). "Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought"
    • PDF Link
    • Chapter on the copycat program and understanding vs. simulation
  3. Turkle, S. (2011). "Alone Together: Why We Expect More from Technology and Less from Each Other"
    • PDF Link
    • Modern examination of human-computer emotional connections

Technical Resources

Modern Chatbot Comparisons

Course Materials

Going Deeper (Optional)

Submission Guidelines

GitHub Classroom Submission

This assignment is submitted via GitHub Classroom. Follow these steps:

  1. Accept the assignment: Click the accept assignment link
    • This creates your own private repository for the assignment (note the repository name!)
    • Template repository (your private version will be based on this template): github.com/ContextLab/eliza-llm-course
  2. Click the notebook (.ipynb file) to view it in GitHub
  3. Click the "Open in Colab" badge at the top. This will open a new Colaboratory session.
  4. Click the Copy to Drive button (at the top) to create an editable copy that you can work on.
  5. Make your edits to the notebook file in Google Colaboratory.
  6. Save your changes back to GitHub using File > Save a copy in GitHub:
    • Select the repository for your assignment from the dropdown menu
    • Change the File path to "Assignment1_ELIZA.ipynb" (i.e., remove the "Copy_of_" text at the beginning)
    • Add a note to the Commit message field, or leave it as the default
    • Make sure the "Include a link to Colab" box is checked
    • Press the "OK" button. This will sync your changes back to your GitHub repository.
  7. Verify submission: Check that your latest commit appears in your GitHub repository before the deadline

Deadline: January 19, 2026 at 11:59 PM EST

Notebook Requirements

  1. Runtime: The notebook must run from start to finish without errors in a fresh Colab session
  2. Dependencies: Include all imports and installations in the notebook
  3. Data: The instructions.txt file should be loaded in your notebook (code for doing this is in the template notebook provided with the assignment)
  4. Output: Keep cell outputs visible in your submission
  5. Deadline: January 19, 2026 at 11:59 PM EST

Before Submission Checklist

Academic Integrity

You are encouraged to: You must: Violations of academic integrity will result in a failing grade for the assignment.

Questions?

If you have questions about the assignment:
  1. Review this README and the Weizenbaum paper
  2. Check the Week 1 lecture materials
  3. Post in the course discussion forum on Discord
  4. Attend office hours
  5. Email me with specific questions

Have fun exploring one of the most important programs in AI history!

ELIZA may be nearly 60 years old, but it raises questions about intelligence, understanding, and human-computer interaction that remain deeply relevant today. By building and analyzing ELIZA, you're engaging with fundamental questions that will resurface throughout this course as we explore modern language models.