CircleCI
All Systems Operational
Docker Jobs Operational (99.64% uptime over the past 90 days)
Machine Jobs Operational (99.64% uptime over the past 90 days)
macOS Jobs Operational (99.64% uptime over the past 90 days)
Windows Jobs Operational (99.64% uptime over the past 90 days)
Pipelines & Workflows Operational (99.62% uptime over the past 90 days)
CircleCI UI Operational (99.96% uptime over the past 90 days)
Artifacts Operational (100.0% uptime over the past 90 days)
Runner Operational (99.64% uptime over the past 90 days)
CircleCI Webhooks Operational (100.0% uptime over the past 90 days)
CircleCI Insights Operational (100.0% uptime over the past 90 days)
Notifications & Status Updates Operational
Billing & Account Operational
CircleCI Dependencies Operational
AWS Operational
Google Cloud Platform Google Cloud DNS Operational
Google Cloud Platform Google Cloud Networking Operational
Google Cloud Platform Google Cloud Storage Operational
Google Cloud Platform Google Compute Engine Operational
mailgun API Operational
mailgun Outbound Delivery Operational
mailgun SMTP Operational
Upstream Services Operational
Atlassian Bitbucket API Operational
Atlassian Bitbucket Source downloads Operational
Atlassian Bitbucket SSH Operational
Atlassian Bitbucket Webhooks Operational
Docker Hub Operational
GitHub Operational
GitHub API Requests Operational
GitHub Packages Operational
GitHub Webhooks Operational
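
The 90-day uptime figures above translate directly into an approximate amount of cumulative downtime. As a rough illustration, the short Python sketch below does that arithmetic for a few of the components listed on this page; it treats the window as exactly 90 days of minutes and ignores maintenance windows and partial outages, so the numbers are only indicative.

    # Rough conversion of a 90-day uptime percentage into implied downtime.
    # Illustrative arithmetic only, using the figures shown on this page.
    WINDOW_MINUTES = 90 * 24 * 60  # 129,600 minutes in the 90-day window

    def downtime_minutes(uptime_percent: float) -> float:
        """Minutes of downtime implied by an uptime percentage over the window."""
        return (100.0 - uptime_percent) / 100.0 * WINDOW_MINUTES

    for component, uptime in [
        ("Docker Jobs", 99.64),
        ("Pipelines & Workflows", 99.62),
        ("CircleCI UI", 99.96),
        ("Artifacts", 100.0),
    ]:
        minutes = downtime_minutes(uptime)
        print(f"{component}: ~{minutes:.0f} min (~{minutes / 60:.1f} h) downtime in 90 days")

For example, 99.64% uptime over 90 days corresponds to roughly 467 minutes, or about 7.8 hours, of accumulated downtime.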
Past Incidents
Sep 28, 2021

No incidents reported today.

Sep 27, 2021
Resolved - Thank you again for your patience and understanding as we worked through this incident. All jobs are processing normally, so we are marking this as Resolved.
Sep 27, 14:46 UTC
Update - We have processed the backlog of jobs, and there should no longer be delays with jobs starting. However, there is still some delay with GitHub checks, so job statuses reported back to GitHub may be delayed. Thank you for your continued patience while we monitor the progress.
Sep 27, 14:22 UTC
Update - We are continuing work on processing the backlog of jobs created by the incident. Docker jobs are no longer delayed; however, new machine, macOS, and Windows jobs may still be delayed. Thank you for your continued patience while we monitor the progress.
Sep 27, 14:10 UTC
Update - We are continuing work on processing the backlog of jobs created by the incident. New jobs may still be delayed. Thank you for your continued patience while we monitor the progress.
Sep 27, 13:25 UTC
Update - We are continuing to process the backlog of jobs created by the incident. We are still experiencing delays with jobs and are continuing to work on reducing the backlog.
Sep 27, 12:56 UTC
Monitoring - We are continuing to process the backlog of jobs created by the incident. We are still experiencing delays with jobs and are continuing to monitor.
Sep 27, 12:16 UTC
Update - We have taken several actions to restore the service, and jobs are now starting to be processed again.

However, as we process the backlog of jobs created by the incident, there are significant delays in job execution.
Sep 27, 11:48 UTC
Update - We are continuing to work on a fix for this issue.
Sep 27, 11:37 UTC
Update - We are continuing to work on a fix for this issue.
Sep 27, 10:57 UTC
Update - We have now contained the collateral issues: you can reach the CircleCI UI again, but the data is not currently accessible.

We are still working on the initial incident.
Sep 27, 10:14 UTC
Update - We are working tirelessly to fix this issue.

Some of the actions we are taking are currently also impacting other components/services.

Right now, you might be unable to access the CircleCI UI. This is unfortunately a side-effect of our effort to fix the initial incident.

We will continue to inform you on our progress.
Sep 27, 09:48 UTC
Update - We are still working on a fix that will allow us to fully restore the service in a safe manner.

Please accept our apologies for the disruption.

Thank you for your patience while we are working on this issue.
Sep 27, 09:20 UTC
Update - We are continuing our effort to find a fix for this issue.
Sep 27, 08:56 UTC
Identified - We have identified the cause of this issue, and we're currently assessing the actions we need to take to safely restore the service.

We realize this is causing significant disruption to our customers' operations, and we're diligently working on a solution.
Sep 27, 08:30 UTC
Update - We are continuing to investigate the cause of the workflow delays.
Sep 27, 07:46 UTC
Update - We are continuing to investigate the cause of workflows being delayed.
Sep 27, 07:34 UTC
Update - We are continuing to investigate this issue.
Sep 27, 07:22 UTC
Update - We are continuing to investigate this issue.
Sep 27, 07:07 UTC
Investigating - We are currently investigating an issue where workflows are being delayed.
Sep 27, 07:05 UTC
Sep 26, 2021

No incidents reported.

Sep 25, 2021

No incidents reported.

Sep 24, 2021
Resolved - We have rolled back a change that created invalid test results between 17:00 and 19:50 UTC on September 24, 2021.
Sep 24, 20:28 UTC
Sep 23, 2021

No incidents reported.

Sep 22, 2021
Resolved - Between 18:50 and 20:17 UTC some jobs may have experienced delays. Customers would not have been charged for delays. A very small number of jobs may have been cancelled and may need to be re-run. We have already resolved the underlying issue and all jobs are running as expected.
Sep 22, 20:24 UTC
Sep 21, 2021

No incidents reported.

Sep 20, 2021

No incidents reported.

Sep 19, 2021

No incidents reported.

Sep 18, 2021

No incidents reported.

Sep 17, 2021

No incidents reported.

Sep 16, 2021

No incidents reported.

Sep 15, 2021

No incidents reported.

Sep 14, 2021
Resolved - Between 16:15 and 16:40 UTC some machine executor jobs may have been run twice. During that time some jobs may have unexpectedly failed. Customers would not have been charged for any repeated job. We have already resolved the underlying issue and all machine jobs are running as intended.
Sep 14, 16:00 UTC
Resolved - This incident is now resolved.

From 09:40 to 10:30 UTC, some of your jobs might have remained in a "Not running" state if you were already close to your concurrency limit; this led to your jobs being incorrectly limited. This was resolved at about 10:30 UTC.

As a result of fixing the above issue, a large number of jobs entered our system at once, meaning that all customers experienced increased latency in running jobs. This was resolved by 10:47 UTC.
Sep 14, 11:07 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Sep 14, 10:55 UTC
Identified - The issue has been identified and a fix is being implemented.
Sep 14, 10:54 UTC
Investigating - We are experiencing an issue impacting a subset of builds; some jobs remain in a "Not running" state for up to 5 minutes before starting.
Sep 14, 10:54 UTC