tag:www.githubstatus.com,2005:/history GitHub Status - Incident History 2025-10-02T17:13:18Z GitHub tag:www.githubstatus.com,2005:Incident/26610255 2025-10-01T16:55:59Z 2025-10-01T16:55:59Z Degraded Performance for GitHub Actions macOS Runners <p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>16:55</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>16:27</var> UTC</small><br><strong>Update</strong> - We are seeing some recovery for image queueing and continuing to monitor.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>14:41</var> UTC</small><br><strong>Update</strong> - We are continuing work to restore capacity for our macOS ARM runners.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>13:58</var> UTC</small><br><strong>Update</strong> - Our team continues to work hard on restoring capacity for the Mac runners.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>13:12</var> UTC</small><br><strong>Update</strong> - Work continues on restoring capacity on the Mac runners.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>12:32</var> UTC</small><br><strong>Update</strong> - macOS ARM runners continue to be at reduced capacity, causing queuing of jobs. Investigation is ongoing.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>11:51</var> UTC</small><br><strong>Update</strong> - Work continues to bring the full runner capacity back online. Resources are focused on improving the recovery of certain runner types.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>11:11</var> UTC</small><br><strong>Update</strong> - We are continuing to see recovery of some runner capacity and investigating slow recovery of certain runner types.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>10:30</var> UTC</small><br><strong>Update</strong> - We are seeing recovery of some runner capacity, while also investigating slow recovery of certain runner types.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>09:44</var> UTC</small><br><strong>Update</strong> - macOS runners are coming back online and starting to process queued work.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>08:59</var> UTC</small><br><strong>Update</strong> - We are continuing to deploy the necessary changes to restore macOS runner capacity.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>08:27</var> UTC</small><br><strong>Update</strong> - We have identified the cause and are deploying a change to restore macOS runner capacity.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>08:17</var> UTC</small><br><strong>Update</strong> - Customers using GitHub Actions macOS runners are experiencing job start delays and failures. We are aware of this issue and actively investigating.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>08:09</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>07:59</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26615316 2025-10-02T17:13:18Z 2025-10-02T17:13:18Z Degraded Gemini 2.5 Pro experience in Copilot <p><small>Oct <var data-var='date'> 2</var>, <var data-var='time'>17:13</var> UTC</small><br><strong>Update</strong> - The underlying issue for the lower token limits for Gemini 2.5 Pro has been identified and a fix is in progress. We will update again once we have tested and confirmed that the fix is correct and globally deployed.</p><p><small>Oct <var data-var='date'> 2</var>, <var data-var='time'>02:52</var> UTC</small><br><strong>Update</strong> - We are continuing to work with our provider to resolve the issue where some Copilot requests using Gemini 2.5 Pro return an error indicating a bad request due to exceeding the input limit size.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>18:16</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate and test solutions internally while working with our model provider on a deeper investigation into the cause. We will update again when we have identified a mitigation.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>17:37</var> UTC</small><br><strong>Update</strong> - We are testing other internal mitigations so that we can return to the higher maximum input length. We are still working with our upstream model provider to understand the contributing factors for this sudden decrease in input limits.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>16:49</var> UTC</small><br><strong>Update</strong> - We are experiencing a service regression for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. The maximum input length of Gemini 2.5 prompts has been decreased. Long prompts or large context windows may result in errors. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Oct <var data-var='date'> 1</var>, <var data-var='time'>16:43</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> tag:www.githubstatus.com,2005:Incident/26592339 2025-09-29T19:12:41Z 2025-09-29T19:12:41Z Disruption with Gemini 2.5 Pro and Gemini 2.0 Flash in Copilot <p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>19:12</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>19:12</var> UTC</small><br><strong>Update</strong> - The upstream model provider has resolved the issue and we are seeing full availability for Gemini 2.5 Pro and Gemini 2.0 Flash.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>18:40</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Gemini 2.5 Pro & Gemini 2.0 Flash models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>18:39</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26591278 2025-09-29T17:33:51Z 2025-09-29T17:33:51Z Disruption with some GitHub services <p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>17:33</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>17:28</var> UTC</small><br><strong>Update</strong> - Customers are getting 404 responses when connecting to the GitHub MCP server. We have reverted a change we believe is contributing to the impact, and are seeing resolution in deployed environments.</p><p><small>Sep <var data-var='date'>29</var>, <var data-var='time'>16:45</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26554560 2025-09-25T17:36:18Z 2025-09-29T22:28:29Z Disruption with some GitHub services <p><small>Sep <var data-var='date'>25</var>, <var data-var='time'>17:36</var> UTC</small><br><strong>Resolved</strong> - On September 26, 2025, between 16:22 UTC and 18:32 UTC, raw file access was degraded for a small set of four repositories. On average, the raw file access error rate was 0.01% and peaked at 0.16% of requests. This was due to a caching bug exposed by excessive traffic to a handful of repositories. <br /><br />We mitigated the incident by resetting the state of the cache for raw file access and are working to improve cache usage and testing to prevent issues like this in the future.<br /></p><p><small>Sep <var data-var='date'>25</var>, <var data-var='time'>17:06</var> UTC</small><br><strong>Update</strong> - We are seeing issues related to our ability to serve raw file access across a small percentage of our requests.</p><p><small>Sep <var data-var='date'>25</var>, <var data-var='time'>17:00</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26542071 2025-09-24T15:36:09Z 2025-09-29T17:34:12Z Disruption with some GitHub services <p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>15:36</var> UTC</small><br><strong>Resolved</strong> - On September 23, 2025, between 15:29 UTC and 17:38 UTC and also on September 24, 2025 between 15:02 UTC and 15:12 UTC, email deliveries were delayed by up to 50 minutes, which resulted in significant delays for most types of email notifications. This occurred due to an unusually high volume of traffic which caused resource contention on some of our outbound email servers.<br /><br />We have updated the configuration we use to better allocate capacity when there is a high volume of traffic and are also updating our monitors so we can detect this type of issue before it becomes a customer-impacting incident.</p><p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>14:55</var> UTC</small><br><strong>Update</strong> - We are seeing delays in email delivery, which is impacting notifications and user signup email verification. 
We are investigating and working on mitigation.</p><p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>14:46</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26538990 2025-09-24T09:18:30Z 2025-09-29T15:48:45Z Claude Opus 4 is experiencing degraded performance <p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>09:18</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>09:16</var> UTC</small><br><strong>Update</strong> - Between around 8:16 UTC and 8:51 UTC we saw elevated errors on Claude Opus 4 and Opus 4.1, with up to 49% of requests failing. This has recovered to around 4% of requests failing, and we are monitoring recovery.</p><p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>09:08</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26534607 2025-09-24T00:26:29Z 2025-10-01T21:21:18Z Incident with Copilot <p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>00:26</var> UTC</small><br><strong>Resolved</strong> - Between 20:06 UTC on September 23 and 04:58 UTC on September 24, 2025, the Copilot service experienced degraded availability for Claude Sonnet 4 and 3.7 model requests.<br /><br />During this period, 0.46% of Claude 4 requests and 7.83% of Claude 3.7 requests failed.<br /><br />The reduced availability resulted from Copilot disabling routing to an upstream provider that was experiencing issues and reallocating capacity to other providers to manage requests for Claude Sonnet 3.7 and 4.<br />We are continuing to investigate the source of the issues with this provider and will provide an update as more information becomes available.</p><p><small>Sep <var data-var='date'>24</var>, <var data-var='time'>00:26</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 and Claude Sonnet 4 are once again available in Copilot Chat, VS Code and other Copilot products.<br /><br />We will continue monitoring to ensure stability, but mitigation is complete.<br /></p><p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>22:22</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Claude Sonnet 3.7 and Claude Sonnet 4 models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>22:22</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> tag:www.githubstatus.com,2005:Incident/26532100 2025-09-23T17:41:57Z 2025-09-24T17:37:48Z Incident with Pages and Actions <p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>17:41</var> UTC</small><br><strong>Resolved</strong> - On September 23, between 17:11 and 17:40 UTC, customers experienced failures and delays when running workflows on GitHub Actions and building or deploying GitHub Pages. 
The issue was caused by a faulty configuration change that disrupted service-to-service communication in GitHub Actions. During this period, in-progress jobs were delayed and new jobs would not start due to a failure to acquire runners, and about 30% of all jobs failed. GitHub Pages users were unable to build or deploy their Pages during this period.<br /><br />The offending change was rolled back within 15 minutes of its deployment, after which Actions workflows and Pages deployments began to succeed. Actions customers continued to experience delays for about 15 minutes after the rollback was completed while services worked through the backlog of queued jobs. We are planning to implement additional rollout checks to help detect and prevent similar issues in the future.</p><p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>17:33</var> UTC</small><br><strong>Update</strong> - We are investigating delays in Actions Workflows.</p><p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>17:28</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions and Pages</p> tag:www.githubstatus.com,2005:Incident/26531614 2025-09-23T17:40:25Z 2025-09-29T17:33:15Z Disruption with some GitHub services <p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>17:40</var> UTC</small><br><strong>Resolved</strong> - On September 23, 2025, between 15:29 UTC and 17:38 UTC and also on September 24, 2025 between 15:02 UTC and 15:12 UTC, email deliveries were delayed by up to 50 minutes, which resulted in significant delays for most types of email notifications. This occurred due to an unusually high volume of traffic which caused resource contention on some of our outbound email servers.<br /><br />We have updated the configuration we use to better allocate capacity when there is a high volume of traffic and are also updating our monitors so we can detect this type of issue before it becomes a customer-impacting incident.<br /></p><p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>16:50</var> UTC</small><br><strong>Update</strong> - We're seeing delays related to outbound emails and are investigating.</p><p><small>Sep <var data-var='date'>23</var>, <var data-var='time'>16:46</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26473036 2025-09-17T17:55:39Z 2025-09-19T21:16:04Z Incident with Codespaces <p><small>Sep <var data-var='date'>17</var>, <var data-var='time'>17:55</var> UTC</small><br><strong>Resolved</strong> - On September 17, 2025 between 13:23 and 16:51 UTC some users in West Europe experienced issues with Codespaces that had shut down due to network disconnections and subsequently failed to restart. Codespace creations and resumes were failed over to another region at 15:01 UTC. While many of the impacted instances self-recovered after mitigation efforts, approximately 2,000 codespaces remained stuck in a "shutting down" state while the team evaluated possible methods to recover unpushed data from the latest active session of affected codespaces. Unfortunately, recovery of that data was not possible. 
We unblocked shutdown of those codespaces, with all instances either shut down or available by 8:26 UTC on September 19.<br /><br />The disconnects were triggered by an exhaustion of resources in the network relay infrastructure in that region, but the lack of self-recovery was caused by an unhandled error impacting the local agent, which led to an unclean shutdown.<br /><br />We are improving the resilience of the local agent to disconnect events to ensure shutdown of codespaces is always clean without data loss. We have also addressed the exhausted resources in the network relay and will be investing in improved detection and resilience to reduce the impact of similar events in the future.</p><p><small>Sep <var data-var='date'>17</var>, <var data-var='time'>17:55</var> UTC</small><br><strong>Update</strong> - We have confirmed that the original failover mitigation has resolved the issue causing Codespaces to become unavailable. We are evaluating whether there is a path to recover unpushed data from the approximately 2000 Codespaces that are currently in the shutting down state. We will be resolving this incident and will detail the next steps in our public summary.</p><p><small>Sep <var data-var='date'>17</var>, <var data-var='time'>16:51</var> UTC</small><br><strong>Update</strong> - For Codespaces that were stuck in the shutting down state and have been resumed, we've identified an issue that is causing the contents of the Codespace to be irrecoverably lost, which has impacted approximately 250 Codespaces. We are actively working on a mitigation to prevent any more Codespaces currently in this state from being forced to shut down and to avoid further potential data loss.</p><p><small>Sep <var data-var='date'>17</var>, <var data-var='time'>16:07</var> UTC</small><br><strong>Update</strong> - We're continuing to see improvement with Codespaces that were stuck in the shutting down state, and we anticipate the remaining ones should self-resolve in about an hour.</p><p><small>Sep <var data-var='date'>17</var>, <var data-var='time'>15:31</var> UTC</small><br><strong>Update</strong> - Some users with Codespaces in West Europe were unable to connect to Codespaces. We have failed over that region, and users should be able to create new Codespaces. If a user has a Codespace in a shutting down state, we are still investigating potential fixes and mitigations.</p><p><small>Sep <var data-var='date'>17</var>, <var data-var='time'>15:04</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Codespaces</p> tag:www.githubstatus.com,2005:Incident/26462594 2025-09-16T18:30:08Z 2025-09-16T18:30:09Z Unauthenticated LFS requests for public repos are returning unexpected 401 errors <p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>18:30</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>18:29</var> UTC</small><br><strong>Update</strong> - We have mitigated the issue and are monitoring the results</p><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>18:02</var> UTC</small><br><strong>Update</strong> - Git Operations is experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>17:55</var> UTC</small><br><strong>Update</strong> - A recent change to our API routing inadvertently added an authentication requirement to the anonymous route for LFS requests. We're in the process of fixing the change, but in the interim retrying should eventually succeed.</p><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>17:55</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26462194 2025-09-16T17:45:22Z 2025-09-19T18:21:23Z Creating GitHub apps using the REST API will fail with a 401 error <p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>17:45</var> UTC</small><br><strong>Resolved</strong> - Between 16:26 UTC on September 15th and 18:30 UTC on September 16th, anonymous REST API calls to approximately 20 endpoints were incorrectly rejected because they were not authenticated. While this caused unauthenticated requests to be rejected by these endpoints, all authenticated requests were unaffected, and no protected endpoints were exposed.<br /><br />This resulted in 100% of requests to these endpoints failing at peak, representing less than 0.1% of GitHub’s overall request volume. On average, the error rate for these endpoints was less than 50% over the roughly 26-hour impact window ending on September 16th, peaking at 100%. API requests to the impacted endpoints were rejected with a 401 error code. This was due to a mismatch in authentication policies for specific endpoints during a system migration.<br /><br />The errors went undetected because the issue affected only a low percentage of overall traffic.<br /><br />We mitigated the incident by reverting the policy in question and correcting the logic associated with the degraded endpoints. We are working to improve our test suite to further validate mismatches and to refine our monitors for proactive detection.</p><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>17:27</var> UTC</small><br><strong>Update</strong> - We have mitigated the issue and are monitoring the results</p><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>17:15</var> UTC</small><br><strong>Update</strong> - A recent change to our API routing inadvertently added an authentication requirement to the anonymous route for creating GitHub apps. We're in the process of fixing the change, but in the interim retrying should eventually succeed.</p><p><small>Sep <var data-var='date'>16</var>, <var data-var='time'>17:14</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26430076 2025-09-15T21:01:03Z 2025-09-18T18:29:33Z Repository search is degraded <p><small>Sep <var data-var='date'>15</var>, <var data-var='time'>21:01</var> UTC</small><br><strong>Resolved</strong> - At around 18:45 UTC on Friday, September 12, 2025, a change was deployed that unintentionally affected search index management. As a result, approximately 25% of repositories were temporarily missing from search results.<br /><br />By 12:45 UTC on Saturday, September 13, most missing repositories were restored from an earlier search index snapshot, and repositories updated between the snapshot and the restoration were reindexed. This backfill was completed at 21:25 UTC.<br /><br />After these repairs, about 98.5% of repositories were once again searchable. 
We are performing a full reconciliation of the search index, and customers can expect to see records being updated and content becoming searchable for all repos again between now and Sept 25.<br /><br />NOTE: Users who notice missing or outdated repositories in search results can force reindexing by starring or un-starring the repository. Other repository actions, such as adding topics or updating the repository description, will also result in reindexing. In general, changes to searchable artifacts in GitHub will also update their respective search index in near-real time.<br /><br />User impact has been mitigated with the exception of the 1.5% of repos that are missing from the search index. The change responsible for the search issue has been reverted, and full reconciliation of the search index is underway, expected to complete by September 23. We have added additional checks to our indexing model to ensure this failure does not happen again. We are also investigating faster repair alternatives.<br /><br />To avoid resource contention and possible further issues, we are not repairing repositories or organizations individually at this time. No repository data was lost, and other search types were not affected.</p><p><small>Sep <var data-var='date'>13</var>, <var data-var='time'>22:39</var> UTC</small><br><strong>Update</strong> - Most searchable repositories should again be visible in search results. Up to 1.5% of repositories may still be missing from search results.<br /><br />Many different actions synchronize the repository state with the search index, so we expect natural recovery for repositories that see more frequent user and API-driven interactions. <br /><br />A complete index reconciliation is underway to restore stagnant repositories that were deleted from the index. We will update again once we have a clear timeline of when we expect full recovery for those missing search results.</p><p><small>Sep <var data-var='date'>13</var>, <var data-var='time'>12:49</var> UTC</small><br><strong>Update</strong> - Customers are not seeing repositories they expect to see in search results. We have restored a snapshot of this search index from Fri 12 Sep at 21:00 UTC. Changes made since then will be unavailable while we work to backfill the rest of the search index. Any new changes will be available in near-real time as expected.</p><p><small>Sep <var data-var='date'>13</var>, <var data-var='time'>12:44</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26450306 2025-09-15T18:28:36Z 2025-09-18T23:23:30Z Disruption with some GitHub services <p><small>Sep <var data-var='date'>15</var>, <var data-var='time'>18:28</var> UTC</small><br><strong>Resolved</strong> - On September 15th between 17:55 and 18:20 UTC, Copilot experienced degraded availability for all features. This was due to a partial deployment of a feature flag to a global rate limiter. The flag triggered behavior that unintentionally rate limited all requests, resulting in 100% of them returning 403 errors. The issue was resolved by reverting the feature flag, which resulted in immediate recovery.<br /><br />The root cause of the incident was an undetected edge case in our rate limiting logic. The flag was meant to scale down rate limiting for a subset of users, but unintentionally put our rate limiting configuration into an invalid state.<br /><br />To prevent this from happening again, we have addressed the bug with our rate limiting. 
We are also adding additional monitors to detect anomalies in our traffic patterns, which will allow us to identify similar issues during future deployments. Furthermore, we are exploring ways to test our rate limit scaling in our internal environment to enhance our pre-production validation process.<br /></p><p><small>Sep <var data-var='date'>15</var>, <var data-var='time'>18:21</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26396329 2025-09-10T14:02:41Z 2025-09-16T15:40:14Z Incident with Actions <p><small>Sep <var data-var='date'>10</var>, <var data-var='time'>14:02</var> UTC</small><br><strong>Resolved</strong> - On September 10, 2025 between 13:00 and 14:15 UTC, Actions users experienced failed jobs and run start delays for Ubuntu 24 and Ubuntu 22 jobs on standard runners in private repositories. Additionally, larger runner customers experienced run start delays for runner groups with private networking configured in the eastus2 region. This was due to an outage in an underlying compute service provider in eastus2. 1.06% of Ubuntu 24 jobs and 0.16% of Ubuntu 22 jobs failed during this period. Jobs for larger runners using private networking in the eastus2 region were unable to start for the duration of the incident.<br /><br />We have identified and are working on improvements in our resilience to single partner region outages for standard runners so impact is reduced in similar scenarios in the future.</p><p><small>Sep <var data-var='date'>10</var>, <var data-var='time'>13:31</var> UTC</small><br><strong>Update</strong> - Actions hosted runners are taking longer to come online, leading to high wait times or job failures.</p><p><small>Sep <var data-var='date'>10</var>, <var data-var='time'>13:23</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p> tag:www.githubstatus.com,2005:Incident/26342409 2025-09-04T20:25:47Z 2025-09-09T16:08:32Z Degraded REST API success rates for some customers <p><small>Sep <var data-var='date'> 4</var>, <var data-var='time'>20:25</var> UTC</small><br><strong>Resolved</strong> - On September 4, 2025 between 15:30 UTC and 20:00 UTC the REST API endpoints git/refs, git/refs/*, and git/matching-refs/* were degraded and returned elevated errors for repositories with reference counts over 22k. On average, the request error rate to these specific endpoints was 0.5%. Overall REST API availability remained 99.9999%. This was due to the introduction of a code change that added latency to reference evaluations and overly affected repositories with many branches, tags, or other references.<br /><br />We mitigated the incident by reverting the new code.<br /><br />We are working to improve performance testing and to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Sep <var data-var='date'> 4</var>, <var data-var='time'>20:05</var> UTC</small><br><strong>Update</strong> - The deployment has completed and we expect customers who have been impacted to see recovery. We are continuing to monitor.</p><p><small>Sep <var data-var='date'> 4</var>, <var data-var='time'>19:28</var> UTC</small><br><strong>Update</strong> - We are in the process of deploying the PR to revert the change that was causing timeouts to this endpoint. 
We will update again once that deployment is complete.</p><p><small>Sep <var data-var='date'> 4</var>, <var data-var='time'>18:57</var> UTC</small><br><strong>Update</strong> - We have identified a deployed change that correlates with the increase in 5XX errors to the GitRefs REST API. This is particularly affecting requests for repos with very large numbers of commits. We are working on rolling back this change, which we expect will resolve the issue.</p><p><small>Sep <var data-var='date'> 4</var>, <var data-var='time'>18:52</var> UTC</small><br><strong>Update</strong> - API Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Sep <var data-var='date'> 4</var>, <var data-var='time'>18:18</var> UTC</small><br><strong>Update</strong> - Customers are experiencing 504 responses for some API requests regarding repo refs/tags. We are investigating.</p><p><small>Sep <var data-var='date'> 4</var>, <var data-var='time'>18:16</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26316711 2025-09-02T15:44:41Z 2025-09-10T13:16:15Z Loading avatars might fail for 0.5% of total users and 100% of users around the Arabian Peninsula. We are investigating. <p><small>Sep <var data-var='date'> 2</var>, <var data-var='time'>15:44</var> UTC</small><br><strong>Resolved</strong> - Between August 21, 2025 at 15:00 UTC and September 2, 2025 at 15:22 UTC, the avatars.githubusercontent.com image service was degraded and failed to display user avatars for users in the Middle East. During this time, avatar images appeared broken on github.com for affected users. On average, this impacted about 82% of users routed through one of our Middle East-based points-of-presence, which represents about 0.14% of global users.<br /><br />This was due to a configuration change within GitHub's edge infrastructure in the affected region, causing HTTP requests to fail. As a result, image requests returned HTTP 503 errors. The issues went undetected because an alerting threshold was set too low.<br /><br />We mitigated the incident by removing the affected site from service, which restored avatar serving for impacted users.<br /><br />To prevent this from recurring, we have tuned configuration defaults for graceful degradation. We also added new health checks to automatically shift traffic from impacted sites. We are updating our monitoring to prevent undetected errors like this in the future.</p><p><small>Sep <var data-var='date'> 2</var>, <var data-var='time'>15:17</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26259467 2025-08-27T21:27:58Z 2025-09-11T19:23:56Z Disruption with some GitHub services <p><small>Aug <var data-var='date'>27</var>, <var data-var='time'>21:27</var> UTC</small><br><strong>Resolved</strong> - On August 27, 2025 between 20:35 and 21:17 UTC, Copilot, Web and REST API traffic experienced degraded performance. Copilot saw an average of 36% of requests fail with a peak failure rate of 77%. Approximately 2% of all non-Copilot Web and REST API traffic requests failed.<br /><br />This incident occurred after we initiated a production database migration to drop a column from a table backing Copilot functionality. While the column was no longer in direct use, our ORM continued to reference the dropped column. 
This led to a large number of 5xx responses and was similar to the incident on August 5th. At 21:15 UTC, we applied a fix to the production schema, and by 21:17 UTC all services had fully recovered.<br /><br />Repairs to prevent this failure mode were already in progress, but they were not completed quickly enough to prevent a second incident. We have now implemented a temporary block on all drop-column operations as an immediate solution while we add more safeguards to prevent similar issues from occurring in the future. We are also implementing graceful degradation so that Copilot issues will not impact other features of our product.</p><p><small>Aug <var data-var='date'>27</var>, <var data-var='time'>21:27</var> UTC</small><br><strong>Update</strong> - API Requests and Issues are operating normally.</p><p><small>Aug <var data-var='date'>27</var>, <var data-var='time'>21:25</var> UTC</small><br><strong>Update</strong> - We've discovered the cause of the service disruption and applied a mitigation.</p><p><small>Aug <var data-var='date'>27</var>, <var data-var='time'>21:13</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate this issue.</p><p><small>Aug <var data-var='date'>27</var>, <var data-var='time'>20:58</var> UTC</small><br><strong>Update</strong> - API Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Aug <var data-var='date'>27</var>, <var data-var='time'>20:55</var> UTC</small><br><strong>Update</strong> - The team is aware of the root cause of this issue and is working to mitigate the issue quickly.</p><p><small>Aug <var data-var='date'>27</var>, <var data-var='time'>20:50</var> UTC</small><br><strong>Update</strong> - Issues is experiencing degraded performance. We are continuing to investigate.</p><p><small>Aug <var data-var='date'>27</var>, <var data-var='time'>20:48</var> UTC</small><br><strong>Update</strong> - API Requests is experiencing degraded availability. We are continuing to investigate.</p><p><small>Aug <var data-var='date'>27</var>, <var data-var='time'>20:41</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26195593 2025-08-21T18:13:02Z 2025-08-26T12:44:42Z Incident with Actions <p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>18:13</var> UTC</small><br><strong>Resolved</strong> - On August 21, 2025, from approximately 15:37 UTC to 18:10 UTC, customers experienced increased delays and failures when starting jobs on GitHub Actions using standard hosted runners. This was caused by connectivity issues in our East US region, which prevented runners from retrieving jobs and sending progress updates. As a result, capacity was significantly reduced, especially for busier configurations, leading to queuing and service interruptions. Approximately 8.05% of jobs on public standard Ubuntu24 runners and 3.4% of jobs on private standard Ubuntu24 runners did not start as expected.<br /><br />By 18:10 UTC, we had mitigated the issue by provisioning additional resources in the affected region and burning down the backlog of queued runner assignments. By the end of that day, we deployed changes to improve runner connectivity resilience and graceful degradation in similar situations. 
We are also taking further steps to improve system resiliency by enhancing observability of network connection health with runners and improving load distribution and failover handling to help prevent similar issues in the future.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>17:58</var> UTC</small><br><strong>Update</strong> - We've applied a mitigation to fix the issues with queuing and running Actions jobs. We are seeing improvements in telemetry and are monitoring for full recovery.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>17:21</var> UTC</small><br><strong>Update</strong> - The team continues to investigate issues with some Actions jobs on Hosted Runners being queued for a long time and a percentage of jobs failing. We are increasing runner capacity and will continue providing updates on the progress towards mitigation.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>16:43</var> UTC</small><br><strong>Update</strong> - The team continues to investigate issues with some Actions jobs on Hosted Runners being queued for a long time and a percentage of jobs failing. We will continue providing updates on the progress towards mitigation.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>16:05</var> UTC</small><br><strong>Update</strong> - We are investigating reports of slow queue times for Hosted Runners, leading to high wait times.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>15:54</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p> tag:www.githubstatus.com,2005:Incident/26190305 2025-08-21T06:58:33Z 2025-08-25T17:39:46Z Incident with Issues and Git Operations <p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>06:58</var> UTC</small><br><strong>Resolved</strong> - On August 21st, 2025, between 6:15am UTC and 6:25am UTC Git and Web operations were degraded and saw intermittent errors. On average, the error rate was 1% for API and Web requests. This was due to database infrastructure automated maintenance reducing capacity below our tolerated threshold.<br /><br />The incident was resolved when the impacted infrastructure self-healed and returned to normal operating capacity.<br /><br />We are adding guardrails to reduce the impact of this type of maintenance in the future.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>06:58</var> UTC</small><br><strong>Update</strong> - Git Operations is operating normally.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>06:58</var> UTC</small><br><strong>Update</strong> - Issues is operating normally.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>06:58</var> UTC</small><br><strong>Update</strong> - The errors in our database infrastructure were related to some maintenance events that had more impact than expected. We will provide more details and follow ups when we post a public summary for this incident in the coming days. All impact to customers is resolved.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>06:39</var> UTC</small><br><strong>Update</strong> - We saw a brief spike in failures related to some of our database infrastructure. 
Everything has recovered, but we are continuing to investigate to ensure we don't see any recurrence.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>06:30</var> UTC</small><br><strong>Update</strong> - Approximately 1% of API and web requests are seeing intermittent errors. Some customers may see some push errors. We are currently investigating.</p><p><small>Aug <var data-var='date'>21</var>, <var data-var='time'>06:25</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Git Operations and Issues</p> tag:www.githubstatus.com,2005:Incident/26183695 2025-08-20T16:37:16Z 2025-08-28T15:25:53Z Disruption with some GitHub services <p><small>Aug <var data-var='date'>20</var>, <var data-var='time'>16:37</var> UTC</small><br><strong>Resolved</strong> - Between 15:49 and 16:37 UTC on 20 Aug 2025, creating a new GitHub account via the web signup page consistently returned server errors, and users were unable to complete signup during this 48-minute window. We detected the issue at 16:04 UTC and restored normal signup functionality by 16:37 UTC. A recent change to signup flow logic caused all signup attempts to fail. The change was rolled back to restore service. This exposed a gap in our test coverage that we are fixing.</p><p><small>Aug <var data-var='date'>20</var>, <var data-var='time'>16:37</var> UTC</small><br><strong>Update</strong> - We have verified that we fixed the signup flow and are working to ensure we don't introduce an issue like this in the future.</p><p><small>Aug <var data-var='date'>20</var>, <var data-var='time'>16:24</var> UTC</small><br><strong>Update</strong> - Customers may experience issues when signing up for new GitHub accounts. We are actively working on a mitigation and will post an update within 30 minutes.</p><p><small>Aug <var data-var='date'>20</var>, <var data-var='time'>16:14</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26171249 2025-08-19T14:46:58Z 2025-08-22T19:54:31Z Disruption with some GitHub services <p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>14:46</var> UTC</small><br><strong>Resolved</strong> - On August 19, 2025, between 13:35 UTC and 14:33 UTC, GitHub search was in a degraded state. When searching for pull requests, issues, and workflow runs, users would have seen some slow, empty or incomplete results. In some cases, pull requests failed to load.<br /><br />The incident was triggered by intermittent connectivity issues between our load balancers and search hosts. While retry logic initially masked these problems, retry queues eventually overwhelmed the load balancers, causing failure. The incident was mitigated at 14:33 UTC by throttling our search index pipeline. <br /><br />Our automated alerting and internal retries reduced the impact of this event significantly. As a result of this incident, we believe we have identified a faster way to mitigate it in the future. 
We are also working on multiple solutions to resolve the underlying connectivity issues.<br /></p><p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>14:46</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>14:46</var> UTC</small><br><strong>Update</strong> - Issues is operating normally.</p><p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>14:45</var> UTC</small><br><strong>Update</strong> - We were able to mitigate the slowness by throttling some search indexing and will work on the issues created by the increased search indexing so they do not have latency impact.</p><p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>14:11</var> UTC</small><br><strong>Update</strong> - We are seeing slightly elevated latency on some Issues endpoints and searches for workflow runs in Actions may not return quickly.</p><p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>13:45</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. We are continuing to investigate.</p><p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>13:44</var> UTC</small><br><strong>Update</strong> - Issues is experiencing degraded performance. We are continuing to investigate.</p><p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>13:39</var> UTC</small><br><strong>Update</strong> - Issues with timeouts when searching</p><p><small>Aug <var data-var='date'>19</var>, <var data-var='time'>13:39</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> tag:www.githubstatus.com,2005:Incident/26128099 2025-08-14T18:37:12Z 2025-08-18T17:54:42Z Incident with Packages <p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>18:37</var> UTC</small><br><strong>Resolved</strong> - On August 14, 2025, between 17:50 UTC and 18:08 UTC, the Packages NPM Registry service was degraded. During this period, NPM package uploads were unavailable and approximately 50% of download requests failed. We identified the root cause as a sudden spike in Packages publishing activity that exceeded our service capacity limits. We are implementing better guardrails to protect the service against unexpected traffic surges and improving our incident response runbooks to ensure faster mitigation of similar issues.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>18:37</var> UTC</small><br><strong>Update</strong> - The NPM registry has now returned to normal functioning.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>18:11</var> UTC</small><br><strong>Update</strong> - The NPM registry service is currently experiencing intermittent availability issues. Other package registries should be unaffected. Investigations are ongoing.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>18:06</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Packages</p> tag:www.githubstatus.com,2005:Incident/26121476 2025-08-14T06:23:09Z 2025-08-18T21:28:14Z Incident with Actions <p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>06:23</var> UTC</small><br><strong>Resolved</strong> - On August 14, 2025, between 02:30 UTC and 06:14 UTC, GitHub Actions was degraded. On average, 3% of workflow runs were delayed by at least 5 minutes. 
The incident was caused by an outage in a downstream dependency that led to failures in backend service connectivity in one region. At 03:59 UTC, we evacuated a majority of services in the impacted region, but some users may have seen ongoing impact until all services were fully evacuated at 06:14 UTC. We are working to improve our monitoring and failover processes to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>05:42</var> UTC</small><br><strong>Update</strong> - We are investigating reports of issues with service(s): Actions. We will continue to keep users updated on progress towards mitigation.</p><p><small>Aug <var data-var='date'>14</var>, <var data-var='time'>05:03</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p>