Identified - We've implemented "Phase 1" of performance improvements, capacity increases, and mitigation efforts at this Ashburn PoP.
Phase 2 is planned for later this week or early next week.
We are going to monitor during the next traffic spike and provide an update.
We apologize for this ongoing issue and are working to make the necessary adjustments to mitigate these performance and reliability issues as soon as possible.
Mar 18, 2025 - 23:01 UTC
Investigating - We're experiencing large-scale attacks / traffic spikes once or twice daily that are resulting in service degradation (timeouts) during these times.
We apologize for the inconvenience and are working on a plan to eliminate the negative effects of these traffic spikes.
We will update this page as soon as we have made the appropriate changes in Ashburn.
Mar 05, 2025 - 19:32 UTC
Update - We have eliminated almost all of the packet loss, but 2-3% packet loss remains during local peak traffic. We are evaluating whether the remaining packet loss can be eliminated in the short term.
Mar 17, 2025 - 22:23 UTC
Monitoring - We are testing a fix and appear to be back at full performance/capacity as of 16:45 UTC.
We are monitoring.
Mar 13, 2025 - 17:11 UTC
Investigating - We are experiencing about 20% packet loss in our Bogota PoP.
This appears to be software-related. We are investigating, but we will be unable to restore full performance on this system for at least the next 8-12 hours.
Investigating - We've seen evidence of occasional timeouts and slow resolution specific to one Chicago location (ORD). One other Chicago location, QORD5, is not affected.
Resolved -
We received a large traffic spike (DDoS) around 15:00 UTC that targeted a specific type of resource exhaustion. When we deployed additional capacity in Ashburn earlier this week, the new systems had a very low ceiling configured for this particular resource, due to an accidental regression introduced in our deployment automation templates last week. As a result, those systems were unable to recover without manual intervention.
Normally, Quad9's systems would have handled this traffic spike without service degradation or resource exhaustion.
A global audit confirmed that only these new servers in Ashburn were affected.
We have fixed the configuration regression in our deployment system and on these new Ashburn servers.
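For illustration only: Quad9 has not named the exhausted resource, but a per-process ceiling such as the open-file-descriptor limit is a typical example of this kind of setting. A minimal, hypothetical audit check in Python (not Quad9's actual tooling or values) might look like:

    import resource

    # Hypothetical check: confirm a per-process ceiling (open file descriptors
    # here) meets an expected minimum, so a template regression that lowers the
    # limit is caught before a traffic spike exhausts it.
    EXPECTED_MIN_SOFT_LIMIT = 65536  # illustrative value only

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft < EXPECTED_MIN_SOFT_LIMIT:
        raise SystemExit(
            f"RLIMIT_NOFILE soft limit {soft} is below the expected minimum "
            f"{EXPECTED_MIN_SOFT_LIMIT}; check deployment templates."
        )
    print(f"ok: RLIMIT_NOFILE soft={soft} hard={hard}")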
We apologize for this issue. Ashburn is one of our busiest global PoPs, and this affected a lot of users.
Issue Start: 15:00 UTC
Issue Stop: 20:20 UTC
Mar 20, 20:33 UTC
Investigating -
We're seeing major packet loss to our systems in Ashburn.
We are investigating.
Issue Start: 15:00 UTC
Mar 20, 19:28 UTC
Mar 19, 2025
No incidents reported.
Mar 18, 2025
Unresolved incident: Ashburn (IAD) - Daily Service Degradation.