GitHub Copilot Claude Opus 4.7 pricing not correct #192814
-
🏷️ Discussion Type: Product Feedback · 💬 Feature/Topic Area: Copilot in GitHub

Claude Opus 4.6 was $5 per million input / $25 per million output tokens, and it was charged as 3 premium requests in GitHub Copilot. Now Claude Opus 4.7 has the same pricing of $5 per million input / $25 per million output tokens, yet it is being charged at 7.5 premium requests. It seems like GitHub is taking advantage of the new model to make extra money.
Replies: 20 comments 18 replies
-
💬 Your Product Feedback Has Been Submitted 🎉 Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
Where to look to see what's shipping 👀
What you can do in the meantime 💻
As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
-
Yeah, and why is it only rolling out to Pro+ and not Pro too....
-
Dear GitHub Copilot team: please at least don't remove Opus 4.6. I get it, token cost doesn't have anything to do with the price you charge for a service (the theory of subjective value, per Carl Menger).
-
The 7.5x multiplier is absolutely outrageous; the value has completely disappeared. And it's also supposed to be a promotional multiplier? Are you kidding me?
-
If they change the pricing like this, the only way is to just go back to Claude Code. We have no other choice but Claude Code. Then Claude increases the rate limits, then we come back here again, then they increase the price again, until small dev teams can't use AI anymore XD.
-
LoL? If this uses 30% more tokens, it should be 4x to be fair, not 7.5x |
-
Dear GitHub Copilot team: please don't be greedy, and don't push away your loyal subscribers. It literally looks like a scam right now. Even with the tokenizer changes in Opus 4.7, a 7.5x price at medium effort is way too high, to say nothing of 15x after the promo ends. I already unsubscribed and will look for other options; the only reason to use Copilot was the per-request pricing and acceptable prices for Anthropic's flagship models.
-
Gemini 3.1 Pro seems to be the most capable model. However, in VS Code Chat it behaves like a complete idiot on agentic tasks. It appears to only reason and generate text, rather than execute actions effectively. Its agentic task capabilities need significant improvement. In Antigravity, it works more or less well.
-
I don't understand why Copilot is charging 2.5x as much for 4.7 while Anthropic kept the API price for Opus 4.6 and 4.7 the same.
-
I've been spending €200 a month on extra premium requests with Opus 4.6 at 3x. I don't even want to try 4.7 at 7.5x. Going back to Claude Code now.
-
Opus 4.6 was already expensive: I was spending around $200/month just for that model. Now that you've removed it, the only options left are Opus 4.7 at a 7.5x multiplier, or the older Opus 4.5 at 3x, the exact same cost per request as Opus 4.6 but with a significantly worse model.

I've seen the technical argument floating around: Opus 4.7's new tokenizer maps the same input to up to 35% more tokens, and its higher-effort reasoning produces longer outputs. These are real factors. But let's do the math: 3x × 1.35 = ~4x. Even accounting for increased output tokens from deeper reasoning, we're still nowhere near 7.5x. The gap between what the technical changes justify and what GitHub is actually charging is hard to ignore.

The core problem isn't just the price of Opus 4.7; it's that Opus 4.6 has been removed entirely, with no like-for-like replacement. Users who were happy paying 3x for a capable model are now forced to either downgrade to Opus 4.5 (older model, same price) or jump to Opus 4.7 at 7.5x, a rate labeled "promotional until April 30" with no disclosure of what comes after. The Anthropic API pricing for Opus 4.7 is identical to Opus 4.6 ($5/M input, $25/M output), so this isn't purely a cost-driven decision. Removing the mid-tier option while offering no equivalent replacement is a business decision, not a technical necessity.

My ask: keep Opus 4.6 available at 3x, or introduce Opus 4.7 at a comparable multiplier. Forcing users to choose between "worse and cheap" or "better and unaffordable" is not a fair trade-off for a paid Pro+ subscription.
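The multiplier arithmetic in the comment above can be sketched in a few lines. The 3x base multiplier and the "up to 35% more tokens" figure come from the thread itself; the extra reasoning-output factor is a hypothetical placeholder, not a published number.

```python
# Sketch of the "what multiplier would the technical changes justify?"
# arithmetic. TOKENIZER_INFLATION is Anthropic's stated upper bound;
# REASONING_FACTOR is a hypothetical guess, not a published figure.
BASE_MULTIPLIER = 3.0       # Opus 4.6 premium-request multiplier
TOKENIZER_INFLATION = 1.35  # up to 35% more input tokens, same content
REASONING_FACTOR = 1.2      # hypothetical extra output from deeper reasoning

input_only = BASE_MULTIPLIER * TOKENIZER_INFLATION
with_reasoning = input_only * REASONING_FACTOR

print(round(input_only, 2))      # 4.05 -> the "~4x" figure above
print(round(with_reasoning, 2))  # 4.86 -> still well below 7.5
```

Even under the generous assumption that both factors compound fully, the implied multiplier stays under 5x, which is the gap the comment is pointing at.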
-
So I cancelled my subscription. However, doing research on these topics, I think they are forcing our hand towards the better, cheaper models, because for 99% of work they perform the job equally well or better; for example, look at gpt 5.4-mini on website design tasks/benchmarks.

At this point, based on my knowledge of context poisoning over context length, I believe gpt 5.4 at 400k is the worst choice among the top-of-the-line 1x usage models, because of lost-in-the-middle issues. If anyone has clear, definitive benchmarks on how many cooperative ideas an agent can hold in its mind simultaneously, I'd guess it's related to the number of MHA heads. Yes, the Codeforces Elo of Opus 4.7 is better, but what problems are you really using it on that demand that level of intelligence? My guess is that spending 1/7.5 the human time to gain marginal benefits from a higher-quality model is a worse decision than thinking through your own problem with critical thinking and first-principles prompt engineering. This means you can't code up your SaaS app in 10 minutes, but instead you code up a better-engineered, more thought-out, more developed SaaS app in 75 minutes. In human time this is arbitrary; in compute credits / monetary terms, the difference is the bottom line.

Plus, spending time to think through the problems makes us smarter, makes the training data higher quality, and improves the next models faster. I'm hoping for research in this area to develop rapidly, but I'm busy working on the ordinality of superintelligence, so there are bigger, transfinite levels of issues to worry about on my side of things that I'm no longer choosing to rush through. I also had to cancel my subscription because I'm a recluse nobody with no job, so you know, a sponsor would help me build safe superintelligence for everybody, just saying.
-
I get it if 4.7 costs more because it uses more tokens per request; that's a legitimate reason for a higher multiplier. But that doesn't explain removing 4.6.

Anthropic's API pricing is identical for 4.5, 4.6, and 4.7. So keeping 4.6 available at its current 3× costs GitHub nothing. The only reason to remove it is to eliminate the cheaper option and force users onto 4.7 at 7.5×. Leaving Opus 4.5 at 3× as the 'budget' choice doesn't count: it's a clearly worse model, which is why nobody was using it.

I was spending real money on Opus 4.6 at 3× every month and was happy to keep doing so. At 7.5× on 4.7, it's not viable for my workflow. If 4.6 isn't restored, I'll be requesting a refund under the May 20 window and moving to Claude Code, which bills at the actual API rate with no multiplier.

The fix is simple: keep 4.6 in the picker at 3×. Let users choose.
-
I suspect that Opus 4.7's real-world cost is closer to the actual cost to run, and that everything before has been heavily subsidised. |
-
Anyone noticed overbilling on premium? I've asked 42 questions since Apr 22, and it says I've used 1020 premium requests. At 7.5 per request, 1020 works out to 136 requests, so it billed me roughly 3x my actual usage.
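The figures in the comment above can be sanity-checked directly. All numbers (42 questions, 1020 premium requests billed, the 7.5 rate) are taken from the comment itself; this is just arithmetic, not a claim about how Copilot's meter actually works.

```python
# Sanity-check of the billing figures quoted above.
questions_asked = 42
premium_billed = 1020
rate = 7.5  # premium requests charged per question at the 7.5x multiplier

expected_bill = questions_asked * rate    # what 42 questions should cost
implied_requests = premium_billed / rate  # questions implied by the bill
overbilling = premium_billed / expected_bill

print(expected_bill)          # 315.0
print(implied_requests)       # 136.0
print(round(overbilling, 2))  # 3.24 -> roughly "3x my usage"
```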
-
The great thing about Copilot was that you could choose your model depending on the complexity of the problem at hand. The latest changes essentially take that choice away. Anthropic models (especially Opus) are unreasonably more expensive compared to OpenAI models. I just cancelled my subscription. Still hoping for pricing adjustments in the future, though.
-
Surprise :D @dehlers-cts @d4rky-pl @LTAcosta @christianarg @profix898
-
Anthropic, or any frontier AI lab, should not release a model at all if it's not commercially viable. There is no point in forcibly keeping us away from Opus 4.6 while forcibly pushing Opus 4.7 at 2.5x the cost of 4.6. Even if what they said about Opus 4.7 is right and it is better than 4.6, we still don't want 4.7 at 2.5 times the cost of 4.6.
-
It's becoming impossible to use Opus; each request costs about 1.1% of the Pro+ plan, wtf.



I found a logical reason why it will cost more. It looks like all coding tools are going to charge more. See below:

Why Opus 4.7 uses more premium requests than Opus 4.6
Two confirmed technical changes directly increase token consumption:
1. Anthropic states that Opus 4.7's new tokenizer maps the same input to up to 35% more tokens, depending on content type. More tokens in → more quota consumed.
2. Opus 4.7 "thinks more at higher effort levels," especially in agentic workflows, which increases output token usage. This means longer, deeper responses → more quota burned.
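At the API prices quoted earlier in the thread ($5/M input, $25/M output, unchanged between 4.6 and 4.7), these two effects can be illustrated with made-up request sizes. The per-request token counts and the 1.5x reasoning-output growth below are hypothetical examples, not measured figures.

```python
# Illustration of the two effects above at Anthropic's published API
# prices. Token counts and the 1.5x output growth are hypothetical.
INPUT_PRICE = 5 / 1_000_000    # USD per input token
OUTPUT_PRICE = 25 / 1_000_000  # USD per output token

def api_cost(input_tokens, output_tokens):
    """Raw Anthropic API cost of one request, in USD."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

old = api_cost(10_000, 2_000)                         # hypothetical 4.6 request
new = api_cost(int(10_000 * 1.35), int(2_000 * 1.5))  # same request on 4.7:
                                                      # 35% more input tokens,
                                                      # 1.5x the output tokens

print(round(old, 4))  # 0.1    -> $0.10 per request on 4.6
print(round(new, 4))  # 0.1425 -> $0.1425 on 4.7, ~1.4x, not 2.5x
```

Under these assumptions the raw API cost per request rises by roughly 40%, which explains a higher multiplier but, as several comments above note, not by itself the jump from 3x to 7.5x.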