Cover image: the Claude landing page, 'Meet your thinking partner.' Photo by Planet Volumes on Unsplash.

Three Months on Claude Code: Switching from Copilot

Three months ago I swapped GitHub Copilot for Claude Code as my primary editor companion. The model quality, the surrounding apps, and the pricing math all point in the same direction.

For roughly the past year my main editor companion at work was GitHub Copilot. It was good enough — inline completions that landed often enough to be useful, a chat panel that could explain unfamiliar code, the usual. Three months ago I switched to Claude Code as my daily driver and have not gone back. The switch was not driven by one feature. It was driven by three trends moving in the same direction at the same time: model quality, surrounding apps, and pricing.

Model quality

The first time I noticed the gap was on the kind of task Copilot is supposed to be good at — finishing a half-typed function. Claude got the intent right on the first try more often, kept variable names consistent with the surrounding file, and stopped trying to autocomplete the next three things I had not asked for. On harder tasks — refactors, multi-file changes, debugging — the gap widened. Copilot would propose an edit; Claude would propose an edit and then check whether it actually compiled.

The agentic loop is where the gap really opens up. Claude Code runs as a CLI agent: I describe what I want, it reads files, edits files, runs commands, and reports back. It is a junior engineer who can take a written-up task, do the thing, and tell me when the test suite passes — not a fancier autocomplete. Copilot has a chat-agent mode too, but the gap in what it could actually finish without supervision was large enough that I stopped reaching for it on anything bigger than a single function.
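
Strip away the terminal UI and the loop itself is easy to sketch. Below is a minimal version using the public Anthropic SDK (not Claude Code's actual implementation, just the same describe/act/report cycle with a single shell tool); the model id and the `run_command` tool name are my own placeholders:

```python
import subprocess
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# One tool: run a shell command and return its output to the model.
TOOLS = [{
    "name": "run_command",  # illustrative tool name, not Claude Code's
    "description": "Run a shell command in the project and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

def run_task(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:
        response = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder model id
            max_tokens=4096,
            tools=TOOLS,
            messages=messages,
        )
        messages.append({"role": "assistant", "content": response.content})
        if response.stop_reason != "tool_use":
            # No more tool calls: the model is reporting back.
            return "".join(b.text for b in response.content if b.type == "text")
        # Execute each requested command and feed the output into the next turn.
        results = []
        for block in response.content:
            if block.type == "tool_use":
                proc = subprocess.run(block.input["command"], shell=True,
                                      capture_output=True, text=True)
                results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": (proc.stdout + proc.stderr)[-4000:],  # truncate
                })
        messages.append({"role": "user", "content": results})
```

Everything Claude Code layers on top (permission prompts, hooks, context compaction) wraps a loop of this shape.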

Screenshot of a Claude Code terminal session. Header reads 'Claude Code v2.1.138, Sonnet 4.6, Claude Max, ~/projects/powershell-profile'. Below the prompt 'let's do a code analysis and fix some issues', the agent responds with 'Let me explore the project structure first', then a 'Searching for 1 pattern, reading 9 files…' status line and a 'Tinkering…' progress indicator with elapsed time and token counts.
Claude Code at the start of a session. The terminal shows the version banner, the active model (Sonnet 4.6 on the Claude Max plan), and the project path. After a one-line prompt — "let's do a code analysis and fix some issues" — the agent acknowledges, decides to explore the project structure, and begins searching the repository on its own. The status line at the bottom shows tool calls landing in real time, including a tip that you can ask side questions without interrupting the current work.

The apps around the model

Choosing an AI assistant is choosing an ecosystem, not just a model. Anthropic ships Claude Code (the CLI), claude.ai (the chat app), mobile and iPad apps, IDE extensions for VS Code and JetBrains, and an API, all sharing conversation history and project context. Open a thread on my phone over morning coffee, pick it up on my laptop two hours later in the terminal. Same model, same context, same memory.

Copilot is tied to GitHub and to Microsoft's editor surface area. That is fine if you live entirely in VS Code, but I do not — I bounce between terminal, editor, browser, and phone, and Copilot only really lives in one of those four. The friction shows up in small ways. A question I want to ask away from the editor goes to a different chat tool. A long-running task I kicked off on my laptop is invisible from anywhere else. The model might be the same product, but the surface area is not.

Pricing

The trigger for the switch was a pricing announcement. Copilot is moving from a flat per-seat model to per-token billing for advanced features, without the volume discounts the underlying API has. On the kind of workload an agent does — long context, lots of tool calls, multi-step refactors — the per-token math gets ugly fast. The "$10 a month for unlimited completions" deal that made Copilot the default is being quietly unwound.
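
To see how fast, run the arithmetic on one agent session. Every number below is invented for illustration; the point is the shape of the curve, because each tool call re-sends the accumulated context, so input tokens grow roughly quadratically with the number of steps:

```python
# Back-of-envelope with invented numbers: a 30-step agent session whose
# context grows ~3k tokens per step and is re-sent on every API call.
# Ignores output tokens and prompt caching, which change the rate, not the shape.
CONTEXT_START = 20_000     # tokens: system prompt plus files read up front
GROWTH_PER_STEP = 3_000    # tokens added per tool call (diffs, command output)
STEPS = 30
PRICE_PER_MTOK = 3.00      # hypothetical dollars per million input tokens

input_tokens = sum(CONTEXT_START + GROWTH_PER_STEP * i for i in range(STEPS))
print(f"{input_tokens:,} input tokens")                       # 1,905,000
print(f"${input_tokens / 1e6 * PRICE_PER_MTOK:.2f}/session")  # $5.72
```

A few sessions like that a day and metered billing blows past any flat subscription within the week; real rates, caching, and discounts move the numbers but not the curve.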

Claude's Max plan is the same shape Copilot used to be: a predictable monthly subscription with generous usage limits, and the limit budget is shared across Claude Code, the web app, and the mobile app. I do not have to think about token meters mid-session. When I switched, my effective rate dropped and my usage went up — exactly the opposite of where Copilot is heading.

What I lean on, three months in

Most of the leverage in Claude Code comes from three primitives the CLI exposes: hooks, skills, and plan mode. Hooks are shell commands the harness runs around tool calls — a Stop hook that runs the type-checker before the agent can hand back control, a PreToolUse hook that vetoes any Bash command matching a secret pattern. Skills are folders with workflow instructions that the model loads on demand; mine handle commit-message formatting, branch naming, security audits, and the codebase-init pass on new repos. Plan mode is a read-only audit phase for any change I cannot easily undo: the model produces a plan, I approve it, then execution starts.
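
To make the hooks concrete, here is roughly what that secret-vetoing PreToolUse hook looks like as a standalone script. This is a sketch assuming the hook contract described in the Claude Code docs (the tool call arrives as JSON on stdin; exiting with code 2 blocks the call and feeds stderr back to the model), and the patterns are illustrative:

```python
#!/usr/bin/env python3
"""PreToolUse hook: veto Bash commands that look like they touch secrets."""
import json
import re
import sys

# Illustrative patterns only; tune these to your own environment.
SECRET_PATTERNS = [
    r"\.env\b",
    r"id_rsa",
    r"AWS_SECRET",
    r"--password[= ]",
]

event = json.load(sys.stdin)  # hook payload from Claude Code
command = event.get("tool_input", {}).get("command", "")

for pattern in SECRET_PATTERNS:
    if re.search(pattern, command):
        # stderr goes back to the model; exit code 2 blocks the tool call.
        print(f"Blocked: command matches secret pattern {pattern!r}",
              file=sys.stderr)
        sys.exit(2)

sys.exit(0)  # anything else runs as normal
```

It gets registered under the `PreToolUse` key in `.claude/settings.json` with a matcher for the Bash tool; the Stop hook that runs the type-checker is the same mechanism pointed at a different event.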

Screenshot of Claude Code in plan mode. The terminal shows a rendered plan (a parody MVP pitch with a Stack table, Team list, Budget, and a numbered Verification section), followed by the prompt 'Claude has written up a plan and is ready to execute. Would you like to proceed?' and four numbered options for how to proceed. The footer notes the plan file path under ~/.claude/plans/ and a 'ctrl-g to edit in VS Code' hint.
Plan mode at the approval step. The agent has finished researching, written the plan to a file under `~/.claude/plans/`, and is asking how to proceed. Four options: auto-accept every edit, manually approve each one, refine the plan further via Ultraplan, or send freeform feedback to revise. Nothing has been written to the project yet — the plan is the contract.

Three months is not a long time, but it is long enough to feel the difference between an assistant that finishes my sentences and an assistant that finishes my tasks.
