Degraded Gemini 2.5 Pro experience in Copilot

Incident Report for GitHub

Update

The underlying issue causing the lower token limits for Gemini 2.5 Pro has been identified and a fix is in progress. We will update again once we have tested and confirmed that the fix is correct and globally deployed.
Posted Oct 02, 2025 - 17:13 UTC

Update

We are continuing to work with our provider to resolve the issue where some Copilot requests using Gemini 2.5 Pro return an error indicating a bad request due to exceeding the input limit size.
Posted Oct 02, 2025 - 02:52 UTC

Update

We are continuing to investigate and test solutions internally while working with our model provider on a deeper investigation into the cause. We will update again when we have identified a mitigation.
Posted Oct 01, 2025 - 18:16 UTC

Update

We are testing other internal mitigations so that we can return to the higher maximum input length. We are still working with our upstream model provider to understand the contributing factors for this sudden decrease in input limits.
Posted Oct 01, 2025 - 17:37 UTC

Update

We are experiencing a service regression for the Gemini 2.5 Pro model in Copilot Chat, VS Code, and other Copilot products. The maximum input length of Gemini 2.5 Pro prompts has been decreased. Long prompts or large context windows may result in errors. This is due to an issue with an upstream model provider, and we are working with them to resolve it.

Other models are available and working as expected.
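While the upstream limit remains reduced, one generic client-side workaround is to trim prompts to a conservative token budget before sending. A minimal sketch, assuming a placeholder limit and a rough chars-per-token heuristic (neither is a published Copilot or Gemini figure):

```python
# Client-side mitigation sketch for a temporarily reduced input limit.
# MAX_INPUT_TOKENS and CHARS_PER_TOKEN are illustrative assumptions only.

MAX_INPUT_TOKENS = 100_000   # assumed reduced limit (placeholder value)
CHARS_PER_TOKEN = 4          # rough heuristic for English text

def truncate_prompt(prompt: str, max_tokens: int = MAX_INPUT_TOKENS) -> str:
    """Trim the oldest context so the prompt fits an approximate token budget."""
    budget_chars = max_tokens * CHARS_PER_TOKEN
    if len(prompt) <= budget_chars:
        return prompt
    # Keep the most recent text, which usually matters most in a chat context.
    return prompt[-budget_chars:]
```

Truncating from the front preserves the latest turns of a conversation, which is typically the safer trade-off when the full context no longer fits.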
Posted Oct 01, 2025 - 16:49 UTC

Investigating

We are investigating reports of degraded performance for Copilot.
Posted Oct 01, 2025 - 16:43 UTC
This incident affects: Copilot.