Customer Impact
Between 05:00 and 20:30 UTC on 2025-06-19, customers in the EU region experienced degraded trigger behavior in Integration Service: triggers either did not fire at all or fired with delays of several hours.
Root Cause
The incident was caused by a sudden spike in the size of an Azure queue used by Integration Service to process trigger events. This spike originated from a single ServiceNow connection that was generating a high volume of events, which were being added to the queue every minute.
The final step in trigger execution, sending a notification to fire the trigger, was failing due to a timeout. Each failed attempt caused the same event to be re-queued, so identical events were processed repeatedly. Over time this produced significant queue congestion, which in turn delayed or degraded the processing of all other trigger events.
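One plausible mechanism for this re-queue loop is sketched below, assuming Azure Storage Queues and the Python azure-storage-queue SDK. The connection string, queue name, payload shape, and send_trigger_notification helper are hypothetical stand-ins for our internal components. A received message stays hidden from other consumers only for its visibility timeout; if the notification call exceeds that window or raises, the message is never deleted and reappears on the queue, so new events arriving every minute plus constant re-delivery grow the backlog without bound.

    import json
    from azure.storage.queue import QueueClient

    # Hypothetical connection string and queue name, for illustration only.
    queue = QueueClient.from_connection_string(
        conn_str="<storage-connection-string>",
        queue_name="trigger-events",
    )

    def send_trigger_notification(event: dict) -> None:
        """Stand-in for the real notification call that was timing out."""
        ...

    # Worker loop sketch: each received message is hidden from other
    # consumers for `visibility_timeout` seconds. If processing takes
    # longer than that (or raises), delete_message is never reached and
    # the message becomes visible again, which is the re-queue loop
    # behind the congestion described above.
    for message in queue.receive_messages(visibility_timeout=30):
        event = json.loads(message.content)
        send_trigger_notification(event)  # timed out during the incident
        queue.delete_message(message)     # never reached on timeout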
Detection
The issue was not detected by our service-specific alerting systems. It was first brought to our attention through our help channel, where internal users and customers reported that triggers were not working for them.
Response
Once the source was identified, the offending connection and its corresponding trigger were disabled. The system was then stabilized by clearing the queued events for that connection and by increasing the event processing timeout, so that messages completed processing instead of being re-queued. These actions restored the availability and performance of triggers.
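For concreteness, the two stabilizing actions can be sketched as follows, under the same assumptions as the earlier example (Azure Storage Queues, the Python SDK, and a hypothetical connection_id field in the event payload). Deleting only the offending connection's messages drains its backlog without touching healthy traffic, and receiving with a longer visibility timeout gives the notification step room to finish before the message would otherwise be re-delivered.

    import json
    from azure.storage.queue import QueueClient

    queue = QueueClient.from_connection_string(
        conn_str="<storage-connection-string>",
        queue_name="trigger-events",
    )

    OFFENDING_CONNECTION = "servicenow-conn-123"  # hypothetical identifier

    # Step 1: drain the backlog created by the offending connection.
    # Messages from other connections are left alone; they simply become
    # visible again once their visibility timeout lapses.
    for message in queue.receive_messages(max_messages=32):
        event = json.loads(message.content)
        if event.get("connection_id") == OFFENDING_CONNECTION:
            queue.delete_message(message)

    # Step 2: receive with a longer visibility timeout so that slow but
    # successful notification sends complete before re-delivery kicks in.
    for message in queue.receive_messages(visibility_timeout=300):
        ...  # process the event and delete the message as usual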
Follow-Up Actions
To prevent similar incidents in the future, we are taking the following steps: