Explore the building blocks of modern language models through hands-on, interactive visualizations, from basic tokenization to advanced transformer architectures.
Experience the historic ELIZA chatbot from 1966. Explore pattern matching, rule-based responses, and the foundations of conversational AI.
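Pattern matching of this kind fits in a few lines. Below is a minimal Python sketch of ELIZA-style rules; the patterns and responses are illustrative stand-ins, not Weizenbaum's original DOCTOR script.

```python
import re

# Illustrative ELIZA-style rules (not the original DOCTOR script):
# each pattern captures part of the user's utterance and reflects it back.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no rule matches

print(respond("I am worried about work"))  # -> How long have you been worried about work?
```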
Journey through 60 years of chatbot history, from ELIZA (1966) to GPT. Chat with implementations from each era.
Visualize how text is broken down into tokens. Compare different tokenization strategies and understand their impact on language models.
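The contrast between strategies is easy to see directly. This Python sketch compares whitespace, word-level, and character-level splits; real subword tokenizers such as BPE or WordPiece learn their vocabulary from data, so this only illustrates differences in granularity.

```python
import re

text = "Tokenization isn't always straightforward."

# Three simple strategies at different granularities.
strategies = {
    "whitespace": text.split(),
    "word-level": re.findall(r"\w+|[^\w\s]", text),
    "char-level": list(text),
}

for name, tokens in strategies.items():
    print(f"{name:10s} ({len(tokens):2d} tokens): {tokens}")
```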
Analyze sentence structure with Part-of-Speech tagging and dependency parsing. Visualize parse trees and grammatical relationships.
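For reference, spaCy exposes both annotations per token; this sketch assumes the en_core_web_sm pipeline is installed (python -m spacy download en_core_web_sm).

```python
import spacy

# Assumes the small English pipeline is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog.")

# Each token gets a part-of-speech tag and a dependency arc to its head,
# which together define the parse tree the demo draws.
for token in doc:
    print(f"{token.text:6s} {token.pos_:6s} {token.dep_:10s} -> {token.head.text}")
```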
Analyze sentiment in movie reviews using multiple models. Explore the IMDB dataset and visualize prediction confidence and feature contributions.
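As a rough stand-in for the demo's models, here is a scikit-learn sketch; the four toy reviews substitute for the IMDB dataset, and predict_proba supplies the kind of confidence scores the demo visualizes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Four toy reviews stand in for the IMDB dataset.
reviews = ["a wonderful, moving film", "tedious and badly acted",
           "great performances throughout", "a dull, lifeless plot"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# predict_proba gives the per-class confidence the demo visualizes
print(model.predict_proba(["a wonderful plot"]))
```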
Benchmark multiple embedding models on similarity, analogy, and categorization tasks. Compare speed vs quality trade-offs.
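The bookkeeping behind such a benchmark is straightforward. In this sketch, random vectors of two sizes stand in for real embedding models, purely to show how speed and similarity scores get recorded.

```python
import time
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Random vectors of two sizes stand in for real embedding models.
rng = np.random.default_rng(0)
pairs = [("cat", "dog"), ("cat", "car")]

for name, dim in {"small-64d": 64, "large-512d": 512}.items():
    vectors = {w: rng.normal(size=dim) for pair in pairs for w in pair}
    start = time.perf_counter()
    scores = [cosine(vectors[a], vectors[b]) for a, b in pairs]
    elapsed = time.perf_counter() - start
    print(f"{name}: similarities={np.round(scores, 3)}, time={elapsed:.6f}s")
```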
Discover hidden topics in documents using Latent Dirichlet Allocation (LDA). Explore 3000 Wikipedia articles and visualize topic distributions.
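scikit-learn's LatentDirichletAllocation covers the core loop; in this sketch, four toy documents stand in for the 3000 Wikipedia articles.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Four toy documents stand in for the 3000 Wikipedia articles.
docs = ["the cell nucleus holds dna", "stars form in collapsing gas clouds",
        "dna replication occurs in the cell", "the galaxy contains many stars"]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Top words per topic, mirroring the demo's topic-distribution view
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-3:][::-1]]
    print(f"topic {i}: {top}")
```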
Solve word analogies using vector arithmetic. Explore relationships like "king - man + woman = queen" in embedding space.
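The arithmetic itself is a one-liner. This sketch uses hand-made 2D vectors chosen so the analogy resolves exactly; the demo uses learned embeddings in far higher dimensions.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hand-made 2D vectors, chosen so the analogy resolves exactly.
emb = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.5, 0.1]),
    "woman": np.array([0.5, 0.9]),
    "queen": np.array([0.9, 1.6]),
    "apple": np.array([0.1, 0.0]),
}

target = emb["king"] - emb["man"] + emb["woman"]

# Nearest neighbor, excluding the three query words
candidates = {w: cosine(v, target) for w, v in emb.items()
              if w not in {"king", "man", "woman"}}
print(max(candidates, key=candidates.get))  # -> queen
```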
Interactive 3D visualization of word embeddings. Explore semantic relationships and discover how words cluster in vector space.
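Before plotting, high-dimensional vectors must be projected down to three coordinates. This sketch uses PCA over random placeholder vectors to show that step; t-SNE or UMAP are common alternatives.

```python
import numpy as np
from sklearn.decomposition import PCA

# Random 50-dimensional vectors stand in for trained word embeddings.
rng = np.random.default_rng(0)
words = ["cat", "dog", "car", "truck", "apple"]
vectors = rng.normal(size=(len(words), 50))

# Project to 3 coordinates, the step a 3D view performs before plotting
coords = PCA(n_components=3).fit_transform(vectors)
for word, (x, y, z) in zip(words, coords):
    print(f"{word:6s} ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```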
Compare semantic, keyword (BM25), and hybrid search methods on the same corpus. See how embeddings improve search relevance.
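One common hybrid recipe is to normalize each method's scores and mix them with a weight; the scores below are invented for three toy documents, since a real system computes BM25 from term statistics and semantic scores from embedding similarity.

```python
import numpy as np

# Invented scores for three toy documents.
bm25_scores = np.array([2.1, 0.4, 1.3])
semantic_scores = np.array([0.55, 0.80, 0.60])

def minmax(x):
    return (x - x.min()) / (x.max() - x.min())

# Normalize both ranges, then mix with a weight alpha.
alpha = 0.5
hybrid = alpha * minmax(bm25_scores) + (1 - alpha) * minmax(semantic_scores)
print(hybrid.argsort()[::-1])  # document indices, best first
```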
Visualize how attention mechanisms work. See how models learn to focus on relevant parts of the input sequence.
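The mechanism itself is compact: scaled dot-product attention computes softmax(QK^T / sqrt(d)) V. A NumPy sketch with random matrices:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 positions, dim 8

output, weights = attention(Q, K, V)
print(weights.round(2))  # each row sums to 1: where each position "looks"
```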
Step through the transformer architecture layer by layer. Understand encoder-decoder interactions and multi-head attention.
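Multi-head attention is mostly a reshaping exercise: split the model dimension into heads, attend per head, and concatenate. This shape-level sketch omits the learned Q/K/V and output projections (the input attends to itself directly) to keep the head-splitting visible.

```python
import numpy as np

# Shape-level sketch of multi-head self-attention; learned projections omitted.
rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 6, 32, 4
d_head = d_model // n_heads

x = rng.normal(size=(seq_len, d_model))
heads = x.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

outputs = []
for h in heads:  # each head attends independently over the sequence
    scores = h @ h.T / np.sqrt(d_head)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    outputs.append(w @ h)

# Concatenating the heads restores the model dimension
concat = np.concatenate(outputs, axis=-1)
print(concat.shape)  # (6, 32)
```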
Experiment with BERT's masked language modeling. See how BERT predicts masked words using bidirectional context.
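With the Hugging Face transformers library, the same experiment takes a few lines; this assumes the library is installed and downloads bert-base-uncased on first use.

```python
from transformers import pipeline

# Downloads bert-base-uncased on first use.
fill = pipeline("fill-mask", model="bert-base-uncased")

# BERT uses context on both sides of the [MASK] token.
for prediction in fill("The capital of France is [MASK].")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```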
Experiment with a mini GPT model. Adjust parameters, see predictions, and understand autoregressive text generation.
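The generation loop is the essential part. Here a hand-written bigram table stands in for a GPT's learned next-token distribution, but the sample-append-repeat structure is the same.

```python
import numpy as np

# A hand-written bigram table stands in for a learned next-token distribution.
probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"<eos>": 1.0},
    "ran": {"<eos>": 1.0},
}

rng = np.random.default_rng(0)
tokens = ["the"]
while tokens[-1] != "<eos>" and len(tokens) < 10:
    dist = probs[tokens[-1]]                 # condition on the prefix
    choices, p = zip(*dist.items())
    tokens.append(rng.choice(choices, p=p))  # sample the next token

print(" ".join(tokens))
```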
Build a retrieval-augmented generation system. Query a corpus of 2000 Wikipedia articles and see how context enhances responses.
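Structurally, RAG is retrieve-then-prompt. This sketch scores documents by simple word overlap over a two-sentence toy corpus; a real system would use dense embeddings for retrieval and an actual language model for generation.

```python
import re

# Two toy sentences stand in for the 2000-article corpus; word overlap
# stands in for embedding similarity.
corpus = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Photosynthesis converts light energy into chemical energy.",
]

def words(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query):
    # score each document by its word overlap with the query
    return max(corpus, key=lambda doc: len(words(query) & words(doc)))

query = "When was the Eiffel Tower completed?"
prompt = f"Context: {retrieve(query)}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # the generator would now answer grounded in the context
```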