
GitHub Status

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com
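
Status can also be checked programmatically. www.githubstatus.com is a Statuspage site, which conventionally exposes an /api/v2/status.json endpoint; whether the regional hosts listed above expose the same path is an assumption in this minimal sketch:

```python
# Sketch: poll the overall status of each GitHub status site.
# Assumes the conventional Statuspage /api/v2/status.json endpoint is
# available on every hostname below (the regional hosts are an assumption).
import json
import urllib.request

STATUS_HOSTS = [
    "www.githubstatus.com",
    "eu.githubstatus.com",
    "au.githubstatus.com",
    "us.githubstatus.com",
]

def overall_status(host: str) -> str:
    """Return the overall status description reported by a Statuspage site."""
    url = f"https://{host}/api/v2/status.json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return payload["status"]["description"]

if __name__ == "__main__":
    for host in STATUS_HOSTS:
        print(f"{host}: {overall_status(host)}")
```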

Git Operations: Operational
Webhooks: Operational
API Requests: Operational
Issues: Operational
Pull Requests: Operational
Actions: Operational
Packages: Operational
Pages: Operational
Codespaces: Operational
Copilot: Operational
Oct 18, 2025

No incidents reported today.

Oct 17, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 17, 14:12 UTC
Update - We're investigating an issue with mobile push notifications. All notification types are affected, but notifications remain accessible in the app's inbox. For 2FA authentication, please open the GitHub mobile app directly to complete login.
Oct 17, 14:01 UTC
Investigating - We are currently investigating this issue.
Oct 17, 13:11 UTC
Oct 16, 2025

No incidents reported.

Oct 15, 2025

No incidents reported.

Oct 14, 2025
Resolved - On October 14th, 2025, between 18:26 UTC and 18:57 UTC, a subset of unauthenticated requests to the commit endpoint for certain repositories received 503 errors. During the event, the average error rate was 3%, peaking at 3.5% of total requests.

This event was triggered by a recent configuration change and some traffic pattern shifts on the service. We were alerted of the issue immediately and made changes to the configuration in order to mitigate the problem. We are working on automatic mitigation solutions and better traffic handling in order to prevent issues like this in the future.

Oct 14, 18:57 UTC
Investigating - We are currently investigating this issue.
Oct 14, 18:26 UTC
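
For API consumers, transient 503s like those described in the incident above are typically handled with retries and backoff on the client side. A minimal sketch, assuming the public REST commits endpoint and an illustrative backoff schedule (this is not official GitHub guidance):

```python
# Illustrative retry policy for transient 5xx responses from the public
# commits endpoint; the retry count and backoff schedule are assumptions.
import time
import urllib.error
import urllib.request

def fetch_commits(owner: str, repo: str, max_retries: int = 4) -> bytes:
    """Fetch a repository's commit list, retrying 5xx errors with backoff."""
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    for attempt in range(max_retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code >= 500 and attempt < max_retries:
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, ... between attempts
                continue
            raise  # non-retryable status, or retries exhausted

print(len(fetch_commits("octocat", "Hello-World")))
```
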
Resolved - On October 14th, 2025, between 13:34 UTC and 16:00 UTC, the Copilot service was degraded for the GPT-5 mini model. On average, 18% of requests to GPT-5 mini failed due to an issue with our upstream provider.

We notified the upstream provider of the problem as soon as it was detected and mitigated the issue by failing over to other providers. The upstream provider has since resolved the issue.

We are working to improve our failover logic to mitigate similar upstream failures more quickly in the future.

Oct 14, 16:00 UTC
Update - GPT-5-mini is once again available in Copilot Chat and across IDE integrations.

We will continue monitoring to ensure stability, but mitigation is complete.

Oct 14, 16:00 UTC
Update - We are continuing to see degraded availability for the GPT-5-mini model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We continue to work with the model provider to resolve the issue.
Other models continue to be available and working as expected.

Oct 14, 15:42 UTC
Update - We continue to see degraded availability for the GPT-5-mini model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We continue to work with the model provider to resolve the issue.
Other models continue to be available and working as expected.

Oct 14, 14:50 UTC
Update - We are experiencing degraded availability for the GPT-5-mini model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Other models are available and working as expected.

Oct 14, 14:07 UTC
Investigating - We are investigating reports of degraded performance for Copilot.
Oct 14, 14:05 UTC
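
The postmortem above mentions mitigating the outage by failing over to other providers. A minimal sketch of that kind of "try providers in order" policy, under the assumption of a simple ordered fallback; the provider names and callables are hypothetical placeholders, not Copilot internals:

```python
# Sketch: ordered failover across upstream model providers.
from typing import Callable, Sequence

class AllProvidersFailed(Exception):
    pass

def complete_with_failover(
    prompt: str,
    providers: Sequence[tuple[str, Callable[[str], str]]],
) -> str:
    """Try each upstream provider in order and return the first success."""
    errors: list[str] = []
    for name, complete in providers:
        try:
            return complete(prompt)
        except Exception as exc:  # real code would catch narrower error types
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))

# Usage with stand-in providers: the first simulates an upstream outage.
def flaky_primary(prompt: str) -> str:
    raise RuntimeError("upstream 5xx")

def healthy_secondary(prompt: str) -> str:
    return f"echo: {prompt}"

print(complete_with_failover("hello", [("primary", flaky_primary),
                                       ("secondary", healthy_secondary)]))
```
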
Oct 13, 2025

No incidents reported.

Oct 12, 2025

No incidents reported.

Oct 11, 2025

No incidents reported.

Oct 10, 2025

No incidents reported.

Oct 9, 2025
Resolved - On October 9th, 2025, between 14:35 UTC and 15:21 UTC, a network device in maintenance mode that was undergoing repairs was brought back into production before repairs were completed. Network traffic traversing this device experienced significant packet loss.

Authenticated users of the github.com UI experienced increased latency during the first 5 minutes of the incident. API users experienced error rates of up to 7.3%, which then stabilized at about 0.05% until the incident was mitigated. The Actions service had 24% of runs delayed by an average of 13 minutes. Large File Storage (LFS) requests saw a minimally increased error rate, with 0.038% of requests erroring.

To prevent similar issues, we are enhancing the validation process for device repairs of this category.

Oct 9, 16:40 UTC
Update - All services have fully recovered.
Oct 9, 16:39 UTC
Update - Actions has fully recovered but Notifications is still experiencing delays. We will continue to update as the system is fully restored to normal operation.
Oct 9, 16:27 UTC
Update - Actions is operating normally.
Oct 9, 16:24 UTC
Update - Pages is operating normally.
Oct 9, 16:08 UTC
Update - Git Operations is operating normally.
Oct 9, 16:04 UTC
Update - Actions and Notifications are still experiencing delays as we process the backlog. We will continue to update as the system is fully restored to normal operation.
Oct 9, 16:02 UTC
Update - Pull Requests is operating normally.
Oct 9, 15:51 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:48 UTC
Update - We are seeing full recovery in many of our systems, but delays are still expected for actions. We will continue to update as the system is fully restored to normal operation.
Oct 9, 15:44 UTC
Update - Webhooks is operating normally.
Oct 9, 15:43 UTC
Update - Webhooks is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:40 UTC
Update - Issues is operating normally.
Oct 9, 15:39 UTC
Update - Pull Requests is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:38 UTC
Update - API Requests is operating normally.
Oct 9, 15:26 UTC
Update - We identified a faulty network component and have removed it from the infrastructure. Recovery has started and we expect full recovery shortly.
Oct 9, 15:25 UTC
Update - Pull Requests is experiencing degraded availability. We are continuing to investigate.
Oct 9, 15:20 UTC
Update - Git Operations is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:20 UTC
Update - Actions is experiencing degraded availability. We are continuing to investigate.
Oct 9, 15:17 UTC
Update - We are investigating widespread reports of delays and increased latency in various services. We will continue to keep users updated on progress toward mitigation.
Oct 9, 15:11 UTC
Update - Issues is experiencing degraded availability. We are continuing to investigate.
Oct 9, 15:09 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:09 UTC
Update - Pages is experiencing degraded performance. We are continuing to investigate.
Oct 9, 15:09 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Oct 9, 14:50 UTC
Investigating - We are investigating reports of degraded availability for Webhooks.
Oct 9, 14:45 UTC
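
The follow-up above commits to enhancing the validation process for device repairs. One way to picture such a gate is a packet-loss check before a repaired device is returned to production; the probe count, loss threshold, and use of the system `ping` binary in this sketch are illustrative assumptions, not GitHub's tooling:

```python
# Hypothetical packet-loss gate for a repaired network device.
import re
import subprocess

def packet_loss_percent(host: str, probes: int = 20) -> float:
    """Ping a host and parse the packet-loss percentage from the summary line."""
    result = subprocess.run(
        ["ping", "-c", str(probes), host],
        capture_output=True, text=True, check=False,
    )
    match = re.search(r"([\d.]+)% packet loss", result.stdout)
    return float(match.group(1)) if match else 100.0  # no summary: assume total loss

def safe_to_return_to_production(host: str, max_loss: float = 0.1) -> bool:
    """Clear a repaired device only if measured loss is within tolerance."""
    return packet_loss_percent(host) <= max_loss

print(safe_to_return_to_production("192.0.2.10"))  # RFC 5737 documentation address
```
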
Resolved - Between 13:39 UTC and 13:42 UTC on Oct 9, 2025, around 2.3% of REST API calls and 0.4% of Web traffic were impacted due to the partial rollout of a new feature that had more impact on one of our primary databases than anticipated. When the feature was partially rolled out, it performed an excessive number of writes per request, which caused excessive latency for writes from other API and Web endpoints and resulted in 5xx errors for customers.

The issue was identified by our automatic alerting and reverted by turning down the percentage of traffic to the new feature, which led to recovery of the data cluster and services.

We are working to improve the way we roll out new features like this and to move the specific writes from this incident to a storage solution better suited to this type of activity. We have also optimized this particular feature so that future rollouts do not impact other areas of the site, and we are investigating how to identify issues like this more quickly.

Oct 9, 13:56 UTC
Update - A feature was partially rolled out that had a high impact on one of our primary databases, but we were able to roll it back. All services have recovered, but we will continue to monitor before setting the status back to green.
Oct 9, 13:54 UTC
Investigating - We are currently investigating this issue.
Oct 9, 13:52 UTC
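
The mitigation above was "turning down the percentage of traffic to the new feature", i.e. a percentage-based rollout dial. A minimal sketch of that idea; the flag name and hashing scheme are assumptions, not GitHub's feature-flag implementation:

```python
# Illustrative percentage-based rollout gate.
import hashlib

def in_rollout(flag: str, actor_id: int, rollout_percent: float) -> bool:
    """Deterministically bucket an actor into a flag's rollout percentage."""
    digest = hashlib.sha256(f"{flag}:{actor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000   # buckets 0..9999 (0.01% granularity)
    return bucket < rollout_percent * 100   # e.g. 25% -> buckets 0..2499

# Dialing the percentage down to 0 stops routing traffic to the new code path.
for pct in (25.0, 1.0, 0.0):
    served = sum(in_rollout("new_write_path", actor, pct) for actor in range(10_000))
    print(f"target {pct:>5.1f}% -> served {served / 100:.2f}% of sampled actors")
```
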
Oct 8, 2025
Resolved - On October 7, 2025, between 7:48 PM UTC and October 8, 12:05 AM UTC (approximately 4 hours and 17 minutes), the audit log service was degraded, creating a backlog and delaying availability of new audit log events. The issue originated in a third-party dependency.

We mitigated the incident by working with the vendor to identify and resolve the issue. Write operations recovered first, followed by the processing of the accumulated backlog of audit log events.

We are working to improve our monitoring and alerting for audit log ingestion delays and strengthen our incident response procedures to reduce our time to detection and mitigation of issues like this one in the future.

Oct 8, 00:05 UTC
Update - We are seeing recovery of audit log ingestion and continue to monitor recovery.
Oct 7, 22:45 UTC
Update - We are seeing recovery of audit log ingestion and continue to monitor recovery.
Oct 7, 21:51 UTC
Update - We continue to apply mitigations and monitor for recovery.
Oct 7, 21:17 UTC
Update - We have identified an issue causing delayed audit log event ingestion and are working on a mitigation.
Oct 7, 20:33 UTC
Update - Ingestion of new audit log events is delayed.
Oct 7, 19:48 UTC
Investigating - We are currently investigating this issue.
Oct 7, 19:48 UTC
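
The follow-up above mentions improving monitoring and alerting for audit log ingestion delays. A minimal sketch of an ingestion-lag check; the 15-minute threshold and the source of the latest ingested event timestamp are assumptions for illustration:

```python
# Hypothetical ingestion-lag alert for an audit log pipeline.
from datetime import datetime, timedelta, timezone

LAG_THRESHOLD = timedelta(minutes=15)  # assumed alerting threshold

def ingestion_lag(latest_ingested_event_time: datetime) -> timedelta:
    """How far behind real time the newest ingested audit log event is."""
    return datetime.now(timezone.utc) - latest_ingested_event_time

def should_alert(latest_ingested_event_time: datetime) -> bool:
    """Page when the ingestion pipeline falls further behind than the threshold."""
    return ingestion_lag(latest_ingested_event_time) > LAG_THRESHOLD

# Example: events last ingested four hours ago would trigger an alert.
stale = datetime.now(timezone.utc) - timedelta(hours=4)
print(should_alert(stale))  # True
```
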
Oct 7, 2025
Oct 6, 2025

No incidents reported.

Oct 5, 2025

No incidents reported.

Oct 4, 2025

No incidents reported.