For roughly a year, GitHub Copilot was my main editor companion at work. It was good enough — inline completions that landed often enough to be useful, a chat panel that could explain unfamiliar code, the usual. Three months ago I switched to Claude Code as my daily driver and have not gone back. The switch was not driven by one feature. It was driven by three trends moving in the same direction at the same time: model quality, surrounding apps, and pricing.

Model quality
The first time I noticed the gap was on the kind of task Copilot is supposed to be good at — finishing a half-typed function. Claude got the intent right on the first try more often, kept variable names consistent with the surrounding file, and stopped trying to autocomplete the next three things I had not asked for. On harder tasks — refactors, multi-file changes, debugging — the gap widened. Copilot would propose an edit; Claude would propose an edit and then check whether it actually compiled.
The agentic loop is where the gap really opens up. Claude Code runs as a CLI agent: I describe what I want, it reads files, edits files, runs commands, and reports back. It is a junior engineer who can take a written-up task, do the thing, and tell me when the test suite passes — not a fancier autocomplete. Copilot has a chat-agent mode too, but the gap in what it could actually finish without supervision was large enough that I stopped reaching for it on anything bigger than a single function.

The apps around the model
Choosing an AI assistant is choosing an ecosystem, not just a model. Anthropic ships Claude Code (the CLI), claude.ai (the chat app), mobile and iPad apps, IDE extensions for VS Code and JetBrains, and an API, all sharing conversation history and project context. Open a thread on my phone over morning coffee, pick it up on my laptop two hours later in the terminal. Same model, same context, same memory.
Copilot is tied to GitHub and to Microsoft's editor surface area. That is fine if you live entirely in VS Code, but I do not — I bounce between terminal, editor, browser, and phone, and Copilot only really lives in one of those four. The friction shows up in small ways. A question I want to ask away from the editor goes to a different chat tool. A long-running task I kicked off on my laptop is invisible from anywhere else. The model might be the same product, but the surface area is not.

Pricing
The trigger for the switch was a pricing announcement. Copilot is moving from a flat per-seat model to per-token billing for advanced features, without the volume discounts the underlying API has. On the kind of workload an agent does — long context, lots of tool calls, multi-step refactors — the per-token math gets ugly fast. The "$10 a month for unlimited completions" deal that made Copilot the default is being quietly unwound.
Claude's Max plan is shaped like the deal Copilot used to offer: a predictable monthly subscription with generous usage limits, and the limit budget is shared across Claude Code, the web app, and the mobile app. I do not have to think about token meters mid-session. When I switched, my effective rate dropped and my usage went up — exactly the opposite of where Copilot is heading.

What I lean on, three months in
Most of the leverage in Claude Code comes from three primitives the CLI exposes: hooks, skills, and plan mode. Hooks are shell commands the harness runs around tool calls — a Stop hook that runs the type-checker before the agent can hand back control, a PreToolUse hook that vetoes any Bash command matching a secret pattern. Skills are folders with workflow instructions that the model loads on demand; mine handle commit-message formatting, branch naming, security audits, and the codebase-init pass on new repos. Plan mode is a read-only audit phase for any change I cannot easily undo: the model produces a plan, I approve it, then execution starts.
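The secret-pattern veto is easy to sketch. This is a minimal illustration, assuming the documented hook contract — the pending tool call arrives on stdin as JSON with a tool_input.command field, and a PreToolUse hook that exits with status 2 blocks the call. The patterns themselves are my own examples, not anything Claude Code ships:

```python
import json
import re

# Illustrative secret patterns -- assumptions for this sketch, not defaults.
SECRET_PATTERNS = [r"\.env\b", r"id_rsa", r"AWS_SECRET"]

def should_block(command: str) -> bool:
    """True if a Bash command matches any of the secret patterns."""
    return any(re.search(p, command) for p in SECRET_PATTERNS)

# A real hook would json.load(sys.stdin) instead; this payload mirrors the
# shape a PreToolUse hook sees for a Bash call.
payload = json.loads('{"tool_name": "Bash", "tool_input": {"command": "cat .env"}}')
command = payload["tool_input"]["command"]
print(should_block(command))  # prints True
```

In the real script, a match would print its reason to stderr and exit with status 2, which vetoes the tool call and feeds the message back to the model; exit 0 lets the call through.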

Three months is not a long time, but it is long enough to feel the difference between an assistant that finishes my sentences and an assistant that finishes my tasks.
