Experiencing Data Latency for Trace Data Type – 01/27 – Resolved


Final Update: Wednesday, 27 January 2016 05:45 UTC

We’ve confirmed that all systems are back to normal with no customer impact as of 01/27, 05:39 UTC. Our logs show the incident started on 01/27, 01:47 UTC, and that during the 3 hours and 52 minutes it took to resolve the issue, 20% of customers experienced data gaps for the Trace data type.
  • Root Cause: The failure was caused by a surge in incoming traffic to the impacted service in the Application Insights pipeline.
  • Lessons Learned: We have taken mitigation steps such as sampling the incoming data, and we have collected the telemetry needed to investigate further mitigation options and prevent a recurrence of this kind of issue (a minimal sketch of what ingestion sampling can look like follows this list).
  • Incident Timeline: 3 Hours & 52 minutes – 01/27, 01:47 UTC through 01/27, 05:39 UTC
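
For illustration only, and not the service team's actual implementation: the ingestion-side sampling mentioned above can be sketched as a fixed-rate filter applied to incoming trace items before they enter the pipeline. The TraceItem type, the sample_rate value of 0.5, and the ingest helper below are hypothetical names chosen for this example.

    import hashlib
    from dataclasses import dataclass

    # Hypothetical shape of one incoming trace telemetry item.
    @dataclass
    class TraceItem:
        operation_id: str
        message: str

    def should_sample(item: TraceItem, sample_rate: float) -> bool:
        """Keep roughly `sample_rate` of traffic. Hashing the operation id
        makes the decision deterministic, so all traces belonging to the
        same operation are kept or dropped together."""
        digest = hashlib.md5(item.operation_id.encode("utf-8")).digest()
        bucket = int.from_bytes(digest[:8], "big") / 2**64  # map hash to [0, 1)
        return bucket < sample_rate

    def ingest(items, sample_rate=0.5):
        """Pass only the sampled subset of incoming traces downstream."""
        return [item for item in items if should_sample(item, sample_rate)]

    if __name__ == "__main__":
        incoming = [TraceItem(operation_id=f"op-{i}", message="request trace")
                    for i in range(1000)]
        kept = ingest(incoming, sample_rate=0.5)
        print(f"kept {len(kept)} of {len(incoming)} trace items")

Under a 0.5 rate this keeps roughly half of the incoming trace volume while preserving whole operations, which is the usual trade-off when sampling is applied to relieve pipeline pressure.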

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Application Insights Service Delivery Team


Update: Wednesday, 27 January 2016 04:49 UTC

The root cause has been isolated to increased traffic impacting our pipeline. We have taken mitigation steps to sample incoming data and scale out infrastructure components, which will help alleviate the current impact. We estimate 60-90 minutes to completely catch up on the latent data.
  • Work Around: none
  • Next Update: Before 01/27 09:00 UTC

-Application Insights Service Delivery Team


Initial Update: Wednesday, 27 January 2016 01:47 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers may experience Data Latency. The following data types are affected: Trace.

  • Work Around: none
  • Next Update: Before 01/27 05:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Application Insights Service Delivery Team

