Welcome to our status page. If you are looking for help, please check our documentation guides or contact us on our community forum. All products listed below have a target availability of 99.9%; a short sketch after the service list shows what that target allows in downtime.
Current status and uptime over the past 90 days:

Automation Cloud: Operational, 99.94% uptime
Orchestrator: Operational, 99.96% uptime
Automation Hub: Operational, 99.99% uptime
AI Center: Operational, 99.99% uptime
Action Center: Operational, 99.99% uptime
Apps: Operational, 99.97% uptime
Automation Ops: Operational, 99.99% uptime
Computer Vision: Operational, 99.99% uptime
Cloud Robots - VM: Operational, 99.99% uptime
Customer Portal: Operational, 99.99% uptime
Data Service: Operational, 99.98% uptime
Documentation Portal: Operational, 99.99% uptime
Document Understanding: Operational, 99.87% uptime
Insights: Operational, 99.98% uptime
Integration Service: Operational, 99.99% uptime
Marketplace: Operational, 99.99% uptime
Process Mining: Operational, 99.99% uptime
Task Mining: Operational, 99.99% uptime
Test Manager: Operational, 99.99% uptime
Communications Mining: Operational, 99.99% uptime
Serverless Robots: Operational, 99.99% uptime
Studio Web: Operational, 99.99% uptime
Solutions Management: Operational, 99.99% uptime
Context Grounding: Operational, 99.99% uptime
Autopilot for Everyone: Operational, 99.99% uptime
Agentic Orchestration: Operational, 100.0% uptime
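As a rough illustration of what these percentages mean in practice, the sketch below converts an availability figure into the downtime it allows over a 90-day window: the 99.9% target works out to roughly 2.2 hours, while 99.99% allows about 13 minutes.

```python
# Illustrative sketch only: convert an availability percentage into the
# downtime budget it allows over a reporting window (90 days by default).

def downtime_budget_minutes(availability_pct: float, window_days: int = 90) -> float:
    """Minutes of downtime permitted by availability_pct over window_days."""
    window_minutes = window_days * 24 * 60
    return (1.0 - availability_pct / 100.0) * window_minutes

if __name__ == "__main__":
    for pct in (99.9, 99.94, 99.99, 100.0):
        print(f"{pct:>6}% over 90 days -> {downtime_budget_minutes(pct):7.1f} minutes allowed")
```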
Status legend: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance.
Resolved -
This incident has been resolved.
Mar 22, 04:47 UTC
Monitoring -
The system has been stable after applying the mitigation. Thank you for your patience.
Mar 22, 04:34 UTC
Identified -
Two instances of our backend authorization service exhausted their connection pools. We restarted the two service instances as a short-term mitigation and increased the connection pool size limit as a longer-term mitigation (a configuration sketch follows this incident's updates).
Mar 22, 04:18 UTC
Investigating -
A high percentage of requests to our backend authorization service in the delayed update ring are failing. This is causing issues with filtering tenants in the portal, with other services being able to authorize requests, and with enumerating entities in solutions builder flows. It currently appears to be caused by a failure to communicate with our backend database in the region.
Mar 22, 04:02 UTC
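The Mar 22 incident above was mitigated in part by raising a connection pool size limit. The updates do not say what the authorization service is built on, so the following is only a minimal sketch, assuming a Python service using SQLAlchemy; the connection string and every number in it are hypothetical.

```python
# Minimal sketch, assuming a Python service using SQLAlchemy; the connection
# string and pool numbers are hypothetical, not taken from the incident.
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://authz:password@db-host/authz",  # hypothetical DSN
    pool_size=50,        # raised steady-state pool size (the old limit was being exhausted)
    max_overflow=20,     # extra connections permitted under burst load
    pool_timeout=5,      # fail fast instead of queueing indefinitely when the pool is empty
    pool_pre_ping=True,  # discard dead connections before handing them out
)
```

Keeping the pool bounded with a short checkout timeout means a traffic burst or a slow database surfaces as fast, visible errors rather than indefinite queueing, which matches the symptom described in the Identified update.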
Resolved -
This incident has been resolved.
Mar 20, 07:29 UTC
Monitoring -
We recently experienced high latency in our application caused by elevated CPU usage at the database level. To address this, we performed a database scale-up to improve performance and stabilize the system (an illustrative sketch follows this incident's updates). The issue has been mitigated, and we are closely monitoring the environment to ensure continued stability.
We are further investigating the root cause of this spike in usage and will take preventive actions accordingly.
Thank you for your patience as we work to maintain optimal system performance.
Mar 20, 06:28 UTC
Investigating -
We are currently investigating the issue.
Mar 20, 05:24 UTC
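The Mar 20 incident above was mitigated with a database scale-up. The updates do not say which database service is involved; purely as a minimal sketch, assuming an Azure SQL database (the resource names and target service objective below are hypothetical), a scale-up can be driven from the Azure CLI:

```python
# Minimal sketch, assuming an Azure SQL database; resource names and the target
# service objective are hypothetical. Requires the Azure CLI ("az") to be
# installed and authenticated.
import subprocess

subprocess.run(
    [
        "az", "sql", "db", "update",
        "--resource-group", "prod-rg",   # hypothetical resource group
        "--server", "prod-sql-server",   # hypothetical logical server
        "--name", "app-db",              # hypothetical database name
        "--service-objective", "S4",     # move to a larger service objective
    ],
    check=True,
)
```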
Resolved -
As part of our ongoing efforts to improve system performance and reliability, we recently upgraded the Istio service mesh across our Kubernetes clusters. This upgrade included changes to the default retry behaviour for intra-cluster communication.
Following the upgrade, we identified and resolved sporadic 503 errors by updating retry configurations for affected applications (a hedged configuration example follows this incident's updates). All systems are now stable, and we continue to monitor performance to ensure reliability.
Thank you for your patience.
Mar 19, 14:48 UTC
Monitoring -
The system has been stable since applying the mitigation. The investigation into the root cause is still in progress, and we will provide further updates soon.
Mar 19, 13:37 UTC
Update -
We have applied a change at the compute layer to mitigate the issue. We will keep the system under monitoring and continue investigating the root cause.
Mar 19, 11:20 UTC
Investigating -
Customers in the Europe region may face intermittent 5xx errors in the Apps service. We are currently investigating this issue.
Mar 19, 08:52 UTC
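The Mar 19 incident above was resolved by updating retry configurations after an Istio upgrade changed the mesh's default retry behaviour. The actual configuration is not published, so the following is a minimal sketch of one way to pin an explicit retry policy on an Istio VirtualService so a route no longer depends on mesh-wide defaults; the names, namespace, and retry values are hypothetical, and the patch is applied with the official Kubernetes Python client.

```python
# Minimal sketch: pin an explicit retry policy on an Istio VirtualService.
# All names, the namespace, and the retry values are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

retry_patch = {
    "spec": {
        "http": [
            {
                "route": [{"destination": {"host": "apps-frontend"}}],  # hypothetical service
                "retries": {
                    "attempts": 3,
                    "perTryTimeout": "2s",
                    "retryOn": "connect-failure,refused-stream,503",
                },
            }
        ]
    }
}

# Note: a merge patch replaces the whole "http" list, so in practice the full
# route list for the VirtualService would be included here.
api.patch_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="apps",          # hypothetical namespace
    plural="virtualservices",
    name="apps-frontend",      # hypothetical VirtualService name
    body=retry_patch,
)
```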
Resolved -
The issue has been resolved. We believe this was an Azure issue and are following up with them to confirm. Issue timeline: start time 5:45 PM UTC, end time 6:41 PM UTC.
Mar 18, 19:39 UTC
Investigating -
We are currently investigating the issue.
Mar 18, 19:17 UTC
Resolved -
This incident has been resolved.
Mar 13, 20:46 UTC
Update -
We are continuing to monitor for any further issues.
Mar 13, 18:24 UTC
Monitoring -
A fix has been implemented, and our teams are monitoring.
Mar 13, 17:41 UTC
Update -
We are continuing to investigate this issue.
Mar 13, 17:40 UTC
Investigating -
We are currently investigating apps not being visible in the Orchestrator interface for some users in the GXP US environment.
Mar 13, 17:33 UTC
Resolved -
Customers in the Japan and Singapore regions experienced high latencies on UiPath due to sudden traffic spikes on 03/12 from 1:15 - 1:30 AM UTC. Traffic volume doubled within a minute in Japan and increased fivefold in Singapore, exceeding the capacity of the regional API Gateway and resulting in nearly half of the requests timing out. The issue has since been resolved, and our engineering teams are working on a fix to prevent recurrence.
Mar 12, 01:15 UTC
Resolved -
This incident has been resolved.
Mar 11, 20:37 UTC
Monitoring -
The issue has subsided, and our teams are monitoring to ensure it does not resurface.
Mar 11, 16:11 UTC
Identified -
The issue has been identified, with the impact limited to customers in the Europe region. Our teams are working to ensure it does not resurface.
Mar 11, 14:29 UTC
Update -
The issue has subsided, but our teams are continuing to investigate the root cause and impact.
Mar 11, 14:11 UTC
Investigating -
Our teams are actively investigating the impact and root cause of this issue. We will provide further updates as a priority.
Mar 11, 13:58 UTC