[2025-10-22] Incident Thread #177704
-
❗ An incident has been declared: Incident with API Requests. Subscribe to this Discussion for updates on this incident. Please upvote or emoji react instead of commenting +1 on the Discussion to avoid overwhelming the thread. Any account guidance specific to this incident will be shared in the thread and on the Incident Status Page.
Replies: 4 comments
-
Update: Some users may see slow or timed-out requests, or "not found" errors when browsing repos. We have identified slowness in our platform and are investigating.
-
Update: We have identified a possible source of the issue. There is currently no user impact, but we are continuing to investigate and will not resolve this incident until we have more confidence in our mitigations and investigation results.
-
Incident Resolved: This incident has been resolved.
-
Incident Summary: On October 22, 2025, between 14:06 UTC and 15:17 UTC, less than 0.5% of web users experienced intermittent slow page loads on GitHub.com. During this time, API requests showed increased latency, with up to 2% timing out. The issue was caused by elevated load on one of our databases due to a poorly performing query, which impacted performance for a subset of requests. We identified the source of the load and optimized the query to restore normal performance. We’ve added monitors for early detection of query performance regressions, and we continue to monitor the system closely to ensure ongoing stability.
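As an illustration of the kind of monitoring described in the summary, here is a minimal, hypothetical sketch of a rolling p99 latency check for a single query. The latency budget, window size, and `measure_query_ms` helper are assumptions made for the example; the summary does not describe the actual monitors, queries, or thresholds used.

```python
import time
import statistics

# Hypothetical sketch of a query-latency monitor in the spirit of the
# "early detection" monitors mentioned in the summary. The budget, window
# size, and helper names below are illustrative assumptions.

P99_BUDGET_MS = 250   # assumed latency budget for the watched query
WINDOW = 100          # number of recent samples to evaluate

def measure_query_ms(run_query) -> float:
    """Time a single execution of the watched query in milliseconds."""
    start = time.perf_counter()
    run_query()
    return (time.perf_counter() - start) * 1000.0

def p99(samples: list[float]) -> float:
    """99th-percentile latency over the given samples."""
    return statistics.quantiles(samples, n=100)[-1]

def should_alert(samples: list[float]) -> bool:
    """Return True when the rolling p99 over the last WINDOW samples exceeds the budget."""
    return len(samples) >= WINDOW and p99(samples[-WINDOW:]) > P99_BUDGET_MS
```

A check like this would typically feed an alerting pipeline so that a regression in query performance is flagged before it saturates the database.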
