Investigating - Starting around 6:10 UTC on January 17th, Arelion (AS1299) began routing Silicon Valley traffic to our Los Angeles PoP.

This affects AT&T traffic, and potentially Verizon and Cox traffic as well.

We are conferring with our network partner in PAO.

Jan 19, 2025 - 09:46 UTC
Investigating - We've seen evidence of occasional timeouts and slow resolution specific to one Chicago location (ORD). One other Chicago location, QORD5, is not affected.

We are investigating.
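Occasional timeouts like these can be characterized from the client side by timing repeated lookups and counting failures. A minimal sketch in Python, where `resolve` is a hypothetical stand-in for a real lookup (faked here so the example runs offline):

```python
import statistics
import time

def sample_latencies(resolve, n=20):
    """Time n resolution attempts; return (latencies, timeout count).

    `resolve` is any zero-argument callable performing one lookup --
    a hypothetical stand-in for e.g. a real DNS query to a PoP.
    """
    latencies, timeouts = [], 0
    for _ in range(n):
        start = time.monotonic()
        try:
            resolve()
        except TimeoutError:
            timeouts += 1
            continue
        latencies.append(time.monotonic() - start)
    return latencies, timeouts

# Demo with a fake resolver that times out once every 10 calls.
calls = {"n": 0}
def fake_resolve():
    calls["n"] += 1
    if calls["n"] % 10 == 0:
        raise TimeoutError          # simulated occasional timeout
    time.sleep(0.001)               # simulated fast answer

lat, lost = sample_latencies(fake_resolve, n=20)
print(f"median={statistics.median(lat) * 1000:.1f} ms, timeouts={lost}/20")
```

Swapping `fake_resolve` for a real query against the affected PoP would surface the intermittent-timeout pattern described above.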

Dec 21, 2024 - 01:21 UTC
Investigating - Our Mumbai (bom2) location is experiencing increased latency and a small amount of packet loss.

Our provider in Mumbai is investigating.

Feb 20, 2024 - 10:27 UTC
Recursive DNS Services: Operational (100.0% uptime over the past 90 days)
Current Status: Operational (100.0% uptime over the past 90 days)
Past incidents (last week): Operational (100.0% uptime over the past 90 days)
Scheduled Maintenance: Operational (100.0% uptime over the past 90 days)
Threat Intelligence API: Operational (100.0% uptime over the past 90 days)
Website: Operational (100.0% uptime over the past 90 days)
Past Incidents
Jan 23, 2025

No incidents reported today.

Jan 22, 2025

No incidents reported.

Jan 21, 2025

No incidents reported.

Jan 20, 2025
Resolved - Service has resumed in Mexico City as of 17:00 UTC.
Jan 20, 20:07 UTC
Identified - This PoP was taken offline due to a physical connection issue with our uplink to our network partner.

We are coordinating troubleshooting efforts with the facility.

Jan 16, 17:12 UTC
Jan 19, 2025

Unresolved incident: Palo Alto (PAO) - Arelion (AT&T, etc) traffic routing to Los Angeles (+15 ms).
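A routing detour of this kind (+15 ms) can be confirmed from the client side by comparing median round-trip times against a baseline. A minimal, illustrative sketch with fabricated sample values:

```python
import statistics

def rtt_shift_ms(baseline_ms, current_ms):
    """Median RTT shift between two sample sets (values in ms)."""
    return statistics.median(current_ms) - statistics.median(baseline_ms)

# Hypothetical ping samples: direct PAO path vs. the LAX detour.
pao_samples = [8.9, 9.1, 9.0, 9.3, 8.8]
lax_samples = [24.1, 23.8, 24.5, 24.0, 23.9]
shift = rtt_shift_ms(pao_samples, lax_samples)
print(f"median RTT shift: +{shift:.0f} ms")
```

Using the median rather than the mean keeps a single slow outlier probe from skewing the comparison.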

Jan 18, 2025

No incidents reported.

Jan 17, 2025
Resolved - This issue should now be resolved.

Since last Sunday, we have been gradually migrating services at this PoP to a newer, more performant architecture, and we had some issues balancing load between the legacy and new architectures during that migration.

We have implemented compensating controls on the older architecture until the migration is complete, which is planned for Monday, January 20th.

We apologize for the inconvenience.

Issue Start: 7:10 UTC
Issue Stop: 8:05 UTC

Jan 17, 10:40 UTC
Monitoring - We have made adjustments as of 8:05 UTC, which should eliminate packet loss.

We are monitoring.

Jan 17, 08:51 UTC
Investigating - Our Singapore (qsin1) PoP is seeing some packet loss.

We are investigating.

Issue Start: 7:10 UTC

Jan 17, 07:33 UTC
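Packet loss like that reported at qsin1 can be estimated client-side by counting unanswered probes. A minimal sketch, where `probe` is a hypothetical stand-in for one real ICMP or DNS probe (faked here so the example runs offline):

```python
def loss_rate(probe, n=100):
    """Estimate packet loss: send n probes via `probe`, a callable
    returning True on reply and False on loss; return the loss ratio."""
    lost = sum(1 for _ in range(n) if not probe())
    return lost / n

# Demo with a deterministic fake probe that drops every 25th packet.
counter = {"n": 0}
def fake_probe():
    counter["n"] += 1
    return counter["n"] % 25 != 0   # simulated 4% loss

rate = loss_rate(fake_probe, n=100)
print(f"estimated loss: {rate:.0%}")
```

With a real probe function, running this before and after a fix (such as the 8:05 UTC adjustment above) shows whether the loss rate actually dropped.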
Jan 16, 2025
Resolved - This location had a flapping physical interface, which caused packet loss and therefore intermittent service.

Quad9 services have been temporarily withdrawn pending physical troubleshooting of the interface at the facility.

Most traffic will probably route to the US for now.

Jan 16, 17:10 UTC
Jan 15, 2025

No incidents reported.

Jan 14, 2025
Resolved - Now back online.
Jan 14, 22:37 UTC
Investigating - Our Rome location is offline pending an equipment refresh. Most traffic should be routing to Milan in the meantime.
Jan 10, 01:35 UTC
Resolved - Service should be significantly improved as of ~12:00 UTC. We are continuing to monitor.
Jan 14, 12:30 UTC
Investigating - We're experiencing packet loss and service reliability issues in Zurich.

We are investigating further.

Jan 14, 08:31 UTC
Jan 13, 2025
Resolved - This issue is resolved.
Jan 13, 01:58 UTC
Monitoring - Our network partner has potentially resolved the issue.

Our Vienna PoP is online for the moment, and we are monitoring.

Jan 12, 14:03 UTC
Investigating - Our network partner is investigating some intermittent reachability issues with some remote networks, which is causing SERVFAIL responses for a handful of domains.
Jan 12, 10:54 UTC
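SERVFAIL is RCODE 2 in the DNS message header (RFC 1035), carried in the low four bits of header byte 3. A minimal sketch of reading the response code from a raw reply, using a fabricated 12-byte header for illustration:

```python
def rcode(dns_response: bytes) -> int:
    """Extract the RCODE (low 4 bits of header byte 3) from a raw DNS
    response; 0 = NOERROR, 2 = SERVFAIL, 3 = NXDOMAIN (RFC 1035)."""
    if len(dns_response) < 12:
        raise ValueError("truncated DNS header")
    return dns_response[3] & 0x0F

# Fabricated 12-byte header: ID=0x1234, QR=1 (response), RCODE=2 (SERVFAIL).
servfail_header = bytes([0x12, 0x34, 0x80, 0x02,
                         0, 0, 0, 0, 0, 0, 0, 0])
print(rcode(servfail_header))  # → 2
```

Sampling RCODEs across the affected domains over time is one way to watch an upstream reachability issue like this one clear up.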
Jan 12, 2025
Jan 11, 2025

No incidents reported.

Jan 10, 2025
Jan 9, 2025
Resolved - This PoP is back online.
Jan 9, 22:44 UTC
Identified - The current plan is to bring this PoP back into service on Thursday.
Jan 6, 18:29 UTC
Investigating - Our qlim1 location is temporarily offline, as it requires complete re-provisioning after a disk failure; we will take the opportunity to upgrade its memory as well.

Some Lima/Peru traffic is routing to Buenos Aires temporarily, adding ~50 ms to the RTT, though much Peru traffic is routing to our other Lima location (lim).

We are aiming to bring this location back into service as soon as possible.

Jan 3, 22:02 UTC
Resolved - Vienna is back online.
Jan 9, 22:29 UTC
Investigating - After a full hardware refresh of our Vienna (VIE) PoP, there is an issue with one of the connections.

The local provider is working to fix this.

Most Vienna/Austria traffic is routing to Frankfurt in the meantime, resulting in an increased RTT of ~20 ms.

Jan 3, 22:04 UTC
Resolved - The issue has remained resolved since 18:00 UTC on Jan 7th.

We have identified a crash scenario in our frontend DNS load balancer and have implemented a workaround while we explore a permanent fix with our software partner.

Jan 9, 11:06 UTC
Monitoring - We have implemented a fix for this issue as of 18:00 UTC.

We are monitoring.

Jan 7, 18:13 UTC
Investigating - We are experiencing intermittent service issues at our Delhi (del) PoP.

We are investigating.

Jan 7, 14:41 UTC