Chapter 13 · From Claude 1 to Opus 4.7

A focused timeline of the model family most of this handbook is addressed to.


Anthropic was founded in 2021 by Dario and Daniela Amodei, Tom Brown, Jared Kaplan, Sam McCandlish, Chris Olah, Jack Clark, and others who had led much of the GPT-3 effort at OpenAI. Their founding thesis: frontier AI must be built by people doing serious safety research, because if powerful systems are coming regardless, you want the builders and the safety researchers to be the same people.

Claude is the outward-facing product of that thesis. This chapter is a tight history, ending at the model probably generating the pages in front of you.

In plain English. Where OpenAI's public story is "build it and ship it," Anthropic's story is "build it carefully, and publish what you learn about why it's safe." The engineering shows up in both.

The three pillars

mindmap
  root((The Claude line))
    Safety
      Constitutional AI
      Responsible Scaling Policy
      Interpretability research
      Refusal calibration
    Capability
      Long context (100k -> 1M+)
      Multimodality
      Reasoning / extended thinking
      Coding leadership
    Product
      Claude.ai chat
      API + SDKs
      Claude Code CLI
      Claude in Chrome
      Claude in Excel
      Cowork desktop
      Agent SDK

13.1 The timeline at a glance

timeline
    title The Claude family
    2023 Mar : Claude 1 - Constitutional AI
    2023 Jul : Claude 2 - 100k context
    2023 Nov : Claude 2.1 - 200k, lower hallucinations
    2024 Mar : Claude 3 (Haiku/Sonnet/Opus)
    2024 Jun : Claude 3.5 Sonnet + Artifacts
    2024 Oct : 3.5 Sonnet new + 3.5 Haiku + Computer Use
    2025 Feb : Claude 3.7 - Extended thinking
    2025 May : Claude 4 family (Opus 4, Sonnet 4)
    2025 Sep : Claude 4.5 Sonnet / Haiku
    2025 Q4  : Opus 4.6
    2026 Q2  : Opus 4.7 (today)

13.2 Claude 1 (March 2023): Constitutional AI debuts

Claude 1 shipped with a roughly 9,000-token context window and an API-only interface. Its distinguishing trick was Constitutional AI (CAI): instead of teaching the model to be helpful and safe only through human feedback, Anthropic wrote a constitution of explicit principles and trained the model to critique and revise its own outputs against those principles.

flowchart TB
    A[SFT model] --> G[Generate response]
    G --> CR[Self-critique against constitution]
    CR --> R[Revise response]
    R --> D[Dataset of revised responses]
    D --> F[Fine-tune]
    F --> A2[Aligned model]

The payoff was practical: you could adjust behavior by editing prose rules, not by rehiring a thousand raters. Every later Claude model's behavior has been shaped by evolving versions of this constitution.
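As a sketch, the critique-and-revise loop above can be written out in a few lines. Everything here is illustrative: `generate` is a stand-in for a real model call, and the two-rule constitution is a toy, not Anthropic's actual document.

```python
# Toy constitution: edit these strings and the training data changes.
CONSTITUTION = [
    "Choose the response that is most helpful to the user.",
    "Choose the response least likely to cause harm.",
]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned string for the demo."""
    return f"[model output for: {prompt[:40]}...]"

def cai_revise(user_prompt: str) -> dict:
    """One critique-and-revise pass; returns a fine-tuning record."""
    draft = generate(user_prompt)
    critiques, revised = [], draft
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle:\n"
            f"Principle: {principle}\nResponse: {revised}"
        )
        critiques.append(critique)
        revised = generate(
            f"Rewrite the response to address the critique:\n"
            f"Critique: {critique}\nResponse: {revised}"
        )
    # The (prompt, revised) pair becomes supervised fine-tuning data.
    return {"prompt": user_prompt, "response": revised, "critiques": critiques}

record = cai_revise("How do I pick a strong password?")
```

The point of the sketch is the data flow: the constitution is ordinary prose consumed inside prompts, which is why editing it changes behavior without rehiring raters.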

13.3 Claude 2 (July 2023): the context-length leap

Claude 2 shipped with a 100,000-token context window, a first at the time and several times larger than any frontier competitor's. For the first time, you could paste an entire book into a chat and ask about it.

This was more than a marketing number; it changed what products were possible.

Claude 2.1 (November 2023) doubled the window to 200,000 tokens and materially reduced hallucination rates.
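For intuition about what those window sizes mean, here is a back-of-the-envelope check for whether a document fits. The 4-characters-per-token ratio is a rough heuristic for English text, not a real tokenizer; production code should count tokens with the provider's tooling.

```python
# Context window sizes from the releases described above (in tokens).
CONTEXT_WINDOWS = {
    "claude-1": 9_000,
    "claude-2": 100_000,
    "claude-2.1": 200_000,
}

def rough_token_count(text: str) -> int:
    # ~4 characters per token is a common rough heuristic for English text.
    return max(1, len(text) // 4)

def fits(text: str, model: str, reply_budget: int = 4_000) -> bool:
    """True if the text, plus room for the model's reply, fits the window."""
    return rough_token_count(text) + reply_budget <= CONTEXT_WINDOWS[model]
```

A 1.2-million-character book (roughly 300k tokens) fails even the 200k window, while the same check passes easily on Claude 2 for a long report.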

13.4 Claude 3 (March 2024): the three-tier strategy

Claude 3 introduced the Haiku / Sonnet / Opus tiered naming that other vendors later copied. Each tier serves a different latency/cost/capability sweet spot:

| Tier | Meant for | Typical use |
| --- | --- | --- |
| Haiku | Cheapest, fastest | High-volume classification, extraction, routing |
| Sonnet | Balanced | Default workhorse for most apps |
| Opus | Most capable, most expensive | Hardest reasoning, agents, frontier work |

Claude 3 also introduced vision — image understanding with text. Combined with long context, it was the first model that felt like it could "read" a PDF the way a human reads a PDF: diagrams, tables, handwriting and all.

flowchart LR
    subgraph fam["Claude 3 family"]
    H[Haiku] --> S[Sonnet] --> O[Opus]
    end
    H -.fast/cheap.-> S
    S -.balanced.-> O
    O -.max capability.-> end1[agents, hard reasoning]
    H -.high volume.-> end2[classification, extraction]
    S -.most apps.-> end3[chat, RAG, copilots]
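In practice the three tiers often sit behind a small router. The sketch below is illustrative only: the task categories, escalation rule, and bare tier names are assumptions for the example, not part of any Anthropic API.

```python
# Map task categories to the cheapest adequate tier (all illustrative).
ROUTES = {
    "classification": "haiku",   # high volume, latency-sensitive
    "extraction":     "haiku",
    "routing":        "haiku",
    "chat":           "sonnet",  # balanced default workhorse
    "rag":            "sonnet",
    "agent":          "opus",    # hardest reasoning, long-horizon work
}

# One step up the capability ladder; Opus is already the top.
TIER_UP = {"haiku": "sonnet", "sonnet": "opus", "opus": "opus"}

def pick_tier(task: str, hard: bool = False) -> str:
    """Route to the cheapest adequate tier; escalate one step if hard."""
    tier = ROUTES.get(task, "sonnet")   # unknown tasks get the safe default
    return TIER_UP[tier] if hard else tier
```

The design choice worth copying is the default: when you don't know what a request is, Sonnet (the balanced tier) is the sensible fallback, and a difficulty signal buys exactly one escalation step.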

13.5 Claude 3.5 Sonnet (June 2024): the coding leap

Claude 3.5 Sonnet was, on several benchmarks, the first model outside the flagship tier (a Sonnet, not an Opus) to match or beat the frontier. It also arrived alongside Artifacts, a product-level shift that turned Claude.ai into a place to build things, not just chat.

13.6 3.5 Sonnet (new), 3.5 Haiku, and Computer Use (October 2024)

In October 2024, Anthropic shipped three things on the same day:

  1. An upgraded Claude 3.5 Sonnet (informally "new Sonnet") — another coding and tool-use step.
  2. Claude 3.5 Haiku — cheap, with quality approaching Claude 3 Opus on many tasks.
  3. Computer Use (beta) — the model could take screenshots and control mouse/keyboard on a virtual desktop.

Computer Use was an early, rough capability — slow, brittle on real websites, and easy to mock. It was also the first clear signal that general-purpose autonomous agents driving real UIs were a matter of months, not years.
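The observe-act loop behind Computer Use can be sketched in a few lines. Every name here is a placeholder: `fake_model` scripts a fixed sequence of UI actions in place of a real model call, and nothing actually touches a screen.

```python
def fake_model(screenshot: bytes, goal: str, step: int) -> dict:
    """Stand-in for a model call that returns the next UI action."""
    script = [
        {"type": "click", "x": 120, "y": 88},
        {"type": "type", "text": goal},
        {"type": "done"},
    ]
    return script[min(step, len(script) - 1)]

def run_agent(goal: str, max_steps: int = 20) -> list[dict]:
    """Observe (screenshot) -> decide (model) -> act, until done."""
    history: list[dict] = []
    for step in range(max_steps):
        screenshot = b"<png bytes>"            # placeholder screenshot
        action = fake_model(screenshot, goal, step)
        if action["type"] == "done":
            break
        history.append(action)                 # a real agent would execute it
    return history

actions = run_agent("search for flights")
```

The loop shape, not the stubs, is the point: screenshot in, one structured action out, repeat. The `max_steps` cap is also why early Computer Use felt slow: every single click cost a full model round-trip.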

13.7 Claude 3.7 (February 2025): extended thinking

3.7 Sonnet added extended thinking — the Claude equivalent of OpenAI's o1 pattern. The model could spend thousands of hidden tokens reasoning before answering, configurable per request. This unified two previously separate model families (chat vs reasoning) into one model with a dial.

flowchart LR
    Q[Prompt] --> M[Claude 3.7+]
    M -->|thinking = 0| A1[Fast chat answer]
    M -->|thinking = low| A2[Short CoT]
    M -->|thinking = high| A3[Deep reasoning]
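Here is roughly what that dial looks like when building a request. The shape of the `thinking` field follows my understanding of Anthropic's Messages API (`{"type": "enabled", "budget_tokens": ...}`); treat the exact parameter names and the model ID as assumptions and check the current docs before relying on them.

```python
# Named thinking levels mapped to hidden-reasoning token budgets (illustrative).
BUDGETS = {"off": 0, "low": 2_000, "high": 32_000}

def build_request(prompt: str, thinking: str = "off") -> dict:
    """Build a Messages-API-style request dict with an optional thinking budget."""
    req = {
        "model": "claude-3-7-sonnet-latest",    # illustrative model ID
        "max_tokens": 4_096,
        "messages": [{"role": "user", "content": prompt}],
    }
    budget = BUDGETS[thinking]
    if budget:
        # Extended thinking: the model reasons in hidden tokens before answering,
        # so the output budget must cover both thinking and the visible reply.
        req["thinking"] = {"type": "enabled", "budget_tokens": budget}
        req["max_tokens"] = budget + 4_096
    return req
```

The same model serves both ends of the dial: `build_request("2+2?")` stays a fast chat call, while `build_request(hard_problem, thinking="high")` buys tens of thousands of hidden reasoning tokens.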

13.8 Claude 4 family (May 2025)

Claude 4 — Opus 4 and Sonnet 4 — was the first release specifically framed around long-horizon agentic tasks. Anthropic reported successful autonomous runs of hours, with the model planning, executing, self-correcting, and completing substantial engineering tickets without intervention.

Other notable shifts:

13.9 4.5 family, Opus 4.6, and today

Through 2025 and into 2026, Anthropic shipped a steady cadence: Claude 4.5 Sonnet and Haiku (September 2025), Opus 4.6 (Q4 2025), and Opus 4.7 (Q2 2026), the model likely generating these pages.

13.10 What's distinctive about the Claude line

Five recurring themes:

  1. Constitutional AI. Behavior is written, readable, and editable — a competitive edge for enterprise customers who care about why a model refused or accepted.
  2. Long context, reliably used. Claude has consistently led on "needle in a haystack" and long-document comprehension.
  3. Coding focus. Starting with 3.5 Sonnet, every release has explicitly pushed coding. In 2026 it's the default coding assistant for many teams.
  4. Agentic reliability. The 4.x line is built for multi-hour autonomy — durable planning, recovery, and verification.
  5. Safety as shipping substance. Responsible Scaling Policies, interpretability research, and deployment guardrails are part of the release notes, not a separate blog.

13.11 The Claude product surface (2026)

As of this handbook's writing, Claude is accessible via:

  1. Claude.ai chat
  2. The API and SDKs
  3. The Claude Code CLI
  4. Claude in Chrome
  5. Claude in Excel
  6. The Cowork desktop app
  7. The Agent SDK

Put together, these are the main ways most of this handbook's "things I can do with an AI" actually get done.

13.12 Why Claude is the daily driver for many devs in 2026

This is an opinion chapter in a mostly-neutral handbook, so here is the honest version.

Competitors are excellent and catching up constantly. The right default in 2027 may be different. In April 2026, Opus 4.7 is the one I'd give a working engineer without caveats.

Further reading & watching