Historical status reports for 2014
14:30 Primary authentication server has become non-responsive.
System restarted, back online 14:42. Investigations under way.
Mail, authentication and some websites were affected.
14:50 Intermittently since 01:50 this morning we have suffered sporadic but
brief connectivity outages and periods of packet loss, attributed to a
DDoS attack against another network in the same datacentre. At 14:50 this
afternoon, our upstream corrupted a router config while attempting to
mitigate the DDoS, resulting in loss of connectivity to our racks. This
was rectified at 15:20, but the replacement device failed within 15 minutes,
and a second replacement could not be sourced for almost an hour.
Services restored 16:20. During this time, websites, mail and DNS were
all unavailable. Customers whose DSL services were connected before the
outage retained external connectivity. Steps are being taken to reduce
our exposure to a similar incident in future, including relocating some
of our equipment to another datacentre, and replicating the servers at
a third, interstate, datacentre.
21:02 Loss of ADSL services. Appears to be a VPN/Router failure, engineers are
on their way to investigate.
21:47 Services restored. Waiting on a complete report; initial indication
is that it was a BGP failure.
03:00 Work from 26-Aug-2014 rescheduled to this timeslot.
06:45 Between 06:45 and 07:00 we will be changing our core router.
We anticipate an outage of under 10 minutes during the changeover.
All services will be affected. (Work rescheduled for 3am, 28-Aug-2014)
06:50 All appears to have completed smoothly. Downtime 100 seconds.
Testing indicates everything operating properly.
06:45 Between 06:45 and 07:45 we will be changing some critical routing
in our core routers. We expect up to 30 minutes during which all
our servers will be unreachable. (This period could be as short
as 30 seconds, but we're warning of a 30-minute outage for safety.)
16:30 Two fibre-optic cables have been cut, affecting all our outbound
domestic and international connectivity. Engineers have been sent
to investigate and route around the issue. More as it comes to hand.
18:10 Major Distributed Denial-of-Service attack is under way. Many of
our servers, and all of our inbound and outbound links, are affected.
Seeing traffic in excess of 2 gigabytes/minute directed at some
servers. Working with our upstream providers to mitigate the attack.
Services restored to normal around 20:20, although the attack is
still under way. Three teams are constantly monitoring the changing
attack vectors and responding. Refinement of our automatic mitigation
techniques is continuing.
Services affected: basically everything, due to extremely high
latency on all interfaces and over all links.
01:27 Total loss of connectivity.
All links are up and all servers are running. Upstream has lost BGP
routing, and no routes are being announced or accepted. They are
investigating the failure mode. Services restored 04:45, but further
work is being conducted.
21:00 Loss of connectivity to many (but not all) national and international
destinations. The fault was upstream of our router. Call logged with
carrier, but service was restored about 21:08. Waiting on further
information.