/
A single chronological feed, newest first. Each card links out to the official blog or paper when we have one, and carries a short description, modality, open- vs closed-weight signal, and known variants.
Manifesto
AI is accelerating at breakneck speed. Every week, a new wave of models drops from labs around the world — and keeping track of it all is nearly impossible. This site is one long answer to a simple question: what happened, when, and who shipped it?
This timeline covers major LLM and related model releases from the Transformer era to today — so you can visualize how fast we're moving. Each row ties a model to its lab, release window, modality, licensing posture, and (when available) variants and scale.
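The fields listed above suggest a simple record shape per row. A minimal sketch of what one entry might look like, assuming field names of my own choosing (this is illustrative, not the site's actual schema):

```typescript
// Hypothetical shape of one timeline entry. Field names and the
// modality/weights unions are illustrative assumptions, not the
// site's real data model.
interface ModelRelease {
  name: string;                // e.g. "Llama 3"
  lab: string;                 // releasing organization
  released: string;            // release window, e.g. "2024-04"
  modality: "text" | "vision" | "audio" | "multimodal";
  weights: "open" | "closed";  // licensing posture signal
  variants?: string[];         // e.g. ["8B", "70B"], when known
  paramsB?: number;            // parameter scale in billions, when known
  link?: string;               // official blog post or paper
}
```

Optional fields mirror the "when available" caveat: not every release publishes scale or variants, so the record stays valid without them.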
The companion graph turns the same dataset into swimlanes, growth curves, density maps, and more — for when you want patterns, not a scroll.
Built to earn the bookmark: minimal by design, fast by default, and always worth returning to.
/snapshot
The timeline data, crunched into numbers — how many models shipped, who's releasing the most, which months get crowded, open vs closed splits, and a bunch of calendar oddities you wouldn't notice just scrolling.
/graph
The same records, reorganized for comparison: release cadence, parameter scale, company share, calendar heatmaps, and where each lab sits geographically.
Filter by open or closed weights, modality, and year. Search matches lab or model names.
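The filter and search behavior described above amounts to a conjunction of predicates over the entries, with the query matched against lab or model name. A rough sketch under assumed field names (the real implementation may differ):

```typescript
// Hypothetical entry and filter shapes; names are illustrative.
interface Entry {
  name: string;
  lab: string;
  weights: "open" | "closed";
  modality: string;
  year: number;
}

interface Filters {
  weights?: "open" | "closed";
  modality?: string;
  year?: number;
  query?: string; // matched against lab or model name
}

// Keep an entry only if it passes every active filter;
// unset filters match everything.
function filterEntries(entries: Entry[], f: Filters): Entry[] {
  const q = f.query?.toLowerCase();
  return entries.filter(
    (e) =>
      (f.weights === undefined || e.weights === f.weights) &&
      (f.modality === undefined || e.modality === f.modality) &&
      (f.year === undefined || e.year === f.year) &&
      (q === undefined ||
        e.name.toLowerCase().includes(q) ||
        e.lab.toLowerCase().includes(q))
  );
}
```

Leaving a filter unset makes it a no-op, so combining controls is just adding clauses to the conjunction.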
Use each entry's link to read the primary announcement — the timeline is the index, not the archive.
Jump to Graph for aggregates and visual comparisons, or Snapshot for stats, streaks, and calendar oddities — same data, different lenses.
I wrote one detailed prompt, dropped it into v0 with Claude Opus 4.6, and it gave me a near-complete working prototype — proper structure, functional layout, the whole thing. From there it was mostly improvements and modifications to get it where I wanted.
Next, I moved into Cursor, picking whichever frontier model made sense for the job at hand. Honestly, most of what you see here was generated or heavily assisted by AI — and that's intentional. Good prompts, the right model, a solid IDE — you can get UI that feels polished and ready to ship. The real work is in steering the thing, not pretending every line was hand-typed from scratch.
Alongside Cursor, I started using Amp CLI for a lot of the work — features, refactors, bug fixes, planning, all of it. I kept switching between the two depending on what felt right for the task. Having a terminal-first agent in the mix made it easy to jump in, steer quick changes, and move on.
I collected model names, release dates, and relevant links — blog posts, docs, papers, HuggingFace pages, GitHub repos — using a mix of Grok search and Gemini search. They're surprisingly good at digging up accurate release info. The recent ones I mostly already knew. Some came straight from official AI lab pages like API docs and pricing pages.
For each model's description, I gave Claude and Gemini focused instructions along with the context I'd gathered — the name, release date, all the links. They'd pull from those sources via web search and write up a description. It's all done manually at the moment, one model at a time.
This whole flow could be an AI agent with some kind of review/verify step — and that's the plan for future releases so new models can be added automatically. For now, it's manual but it works.
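The search-then-draft-then-verify flow described above could be sketched as a small pipeline. Every function here is a placeholder for a model or API call that does not exist yet; the names are my own, purely illustrative:

```typescript
// Hypothetical sketch of the planned agent flow: search -> draft -> verify.
// All functions are stubs standing in for web-search / LLM calls.
interface Draft {
  model: string;
  description: string;
  sources: string[];
}

function searchSources(_model: string): string[] {
  // placeholder: would call a web-search API for blogs, papers, repos
  return [];
}

function draftDescription(model: string, sources: string[]): Draft {
  // placeholder: would prompt an LLM with the gathered context
  return { model, description: "", sources };
}

function verifyDraft(draft: Draft): boolean {
  // placeholder: a second model (or a human) checks dates and claims
  // against the cited sources before the entry is published
  return draft.sources.length > 0 && draft.description.length > 0;
}

function addModel(model: string): Draft | null {
  const sources = searchSources(model);
  const draft = draftDescription(model, sources);
  return verifyDraft(draft) ? draft : null; // unverified drafts are rejected
}
```

The point of the shape is the gate at the end: nothing lands in the timeline unless the verify step signs off, which is what makes automation trustworthy enough to replace the current manual process.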
Other catalogs exist — some narrower, some broader, some optimized for benchmarks rather than narrative history. This project sits in the middle: dense enough to browse, shallow enough per row that you can still skim.
An interactive tree view of AI model naming across labs — exposing every skipped version, weird suffix, and rebrand in a collapsible file-tree structure.
A massive spreadsheet-style catalog of 1000+ LLMs with parameter counts, benchmark scores, training data, and release dates — maintained by Dr Alan D. Thompson.
A similar chronological timeline of LLM releases from the Transformer era to today — fully AI-agent generated, covers the basics but lacks graphs and deeper data.
A curated timeline spanning 2015–2026, telling the story of the last decade in AI — from cultural trends to technical advancements, with each event linking to source material.