Copilot Chat with GPT-5 and GPT-5-mini takes a very long time to respond #172331
-
Select Topic Area: Bug
Copilot Feature Area: VS Code
Body: Hi, I started my Copilot Pro subscription recently and found that GPT-5 and GPT-5-mini are extremely slow in Chat. In my rough test, GPT-5 and GPT-5-mini each took more than 20s to answer the question "How do you install homebrew on a brand new Mac". Meanwhile, in another context, GPT-4.1 took only 2s and Claude Sonnet 4 took 3s. I'm pretty sure my network connection is fine. This also happens on GitHub Mobile and the GitHub Copilot page. Please take a look, and let me know if any more details are needed. Thanks.
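For a more repeatable comparison than a single stopwatch reading, latency can be averaged over several runs. A minimal sketch: `ask_model` here is a hypothetical stand-in (Copilot Chat has no public API for this), so you would swap in whatever actually sends the prompt; only the timing logic is the point.

```python
import time

def ask_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call to your chat client.
    time.sleep(0.01)  # simulate network + inference delay
    return f"{model} answer"

def time_model(model: str, prompt: str, runs: int = 3) -> float:
    """Return the mean wall-clock latency in seconds over several runs."""
    elapsed = []
    for _ in range(runs):
        start = time.perf_counter()
        ask_model(model, prompt)
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)

prompt = "How do you install homebrew on a brand new Mac"
for model in ("gpt-4.1", "gpt-5", "gpt-5-mini"):
    print(f"{model}: {time_model(model, prompt):.2f}s")
```

Averaging over a few runs at different times of day also helps separate model slowness from temporary server congestion.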
Replies: 6 comments 3 replies
-
💬 Your Product Feedback Has Been Submitted 🎉 Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩
- Where to look to see what's shipping 👀
- What you can do in the meantime 💻

As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
-
Have you tested at different times of day? GitHub notes that "Response times may vary during periods of high usage." (reference) It's likely that the issue is related to temporary congestion during peak hours. Sadly, there's not much you can do about it other than using other models; I personally prefer GPT-4.1 or Sonnet 4.
-
I was looking forward to using GPT-5, but I think I'll go back to 4.1. This is so boring, and coffee gets expensive when you can easily have a cup while waiting :-)
-
Hi there, thanks for sharing the details. Right now, many users have noticed that GPT-5 and GPT-5-mini in Copilot Chat respond more slowly than GPT-4.1 and other models. The difference you're seeing (20+ seconds vs. 2–3 seconds) usually isn't caused by your network connection; it has more to do with model size and how requests are handled on GitHub's servers. A few things you can try:
- If speed is more important than accuracy, stick with GPT-4.1 or Claude for now; both are noticeably faster.
- For longer queries, expect GPT-5 to take more time, since it processes a larger context and does more detailed reasoning.
- Keep an eye on GitHub's Copilot changelog; performance improvements are actively being worked on, and rollouts may reduce latency over time.

This isn't something you can fix on your end, but reporting it (like you just did) is the right step. If it continues to be unusually slow, you could also open a support ticket with GitHub so they can check your account specifically.
-
I noticed that GPT-5 reads far too many files for context (often unnecessarily), which slows down the response significantly. Most of the time I add "do not read other files for reference" to my prompt if I only want it to review the current file, which usually works fine without outside references.
-
|
Same problem.
Thanks for the good idea. However, GPT-5 and GPT-5-mini still have non-negligible latency, so I've given up struggling with them; GPT-4.1 and Claude Sonnet 4 work fine. :)