Over recent weeks there have been two reports of Internet outages. This is curious in itself, given that one of the original objectives of the Internet was to create a communications system that could withstand a nuclear war. But not so, it seems, for the Internet in New Zealand.
What was described as a “massive outage” affecting connections from Wellington, Kapiti, Hutt Valley, Palmerston North and through to Napier took place on Friday 6 June. One estimate was that 90% of customers were affected. The outage lasted more than an hour but was fixed by 12:30 pm.
Chorus reported that the outage was "the result of human error during planned works, which resulted in one of our core ethernet routers for the Wellington region being isolated from our network. The error was identified and corrected, and all services restored within a 1.5-hour period."
At its peak approximately 118,000 services were affected by the outage.
It is understood that the equipment that failed was based in a Chorus site on Wellington's Courtenay Place.
As if this wasn’t bad enough, wireless Internet was compromised a couple of days before the Chorus outage. For some reason this was not reported until afterwards.
What happened was that an Australian Navy ship on its way to Wellington accidentally blocked wireless internet and radio services across parts of the North and South Islands.
The Guardian reported that on 4th June as HMAS Canberra was passing along New Zealand’s coast on its approach to Wellington, its navigation radar interfered with wireless and radio signals over a large area spanning Taranaki in the North Island to the Marlborough region in the South Island.
It is understood that when the radar signal was detected on the frequency used by many internet providers and radio stations, those commercial operators had to stop using the channel.
The New Zealand Defence Force said it contacted its Australian counterpart after the issue was reported.
“HMAS Canberra became aware that their navigation radar was interfering with Wi-Fi in the Taranaki to the Marlborough region on approach to Wellington,” an ADF spokesperson said.
“On becoming aware, HMAS Canberra changed frequencies rectifying the interference. There are no ongoing disruptions.”
Further details revealed that the navigation radar from Canberra disrupted 5 GHz wireless access points: radar detection triggered a built-in radar-avoidance mechanism (known as Dynamic Frequency Selection) that took the devices offline - a safety precaution to prevent wireless signals interfering with radar systems in New Zealand’s airspace.
Both of these incidents demonstrated a vulnerability in New Zealand’s communications network and especially in the Internet which is becoming increasingly vital to the everyday lives of the majority of New Zealanders.
And this fragile network is meant to survive a nuclear war? Let’s have a look at the background.
The Internet began as a military experiment and evolved through academic collaboration into a publicly accessible network. Its history is a story of innovation, cooperation, and adaptability, shaped by both governmental objectives and technological breakthroughs.
The Internet's roots lie in the Cold War era, specifically in the late 1960s, when the U.S. Department of Defense sought a method of communication that could withstand a nuclear attack or other large-scale disruptions. This led to the creation of the ARPANET (Advanced Research Projects Agency Network), which connected research institutions and allowed them to share computing resources and data.
The key innovation behind ARPANET was packet switching (of which more later), a method of breaking data into small packets that are routed independently through the network. This was a significant shift from traditional circuit-switched communication (used in telephony), making the network more resilient and efficient.
In the 1970s and early 1980s, development continued in academic and research circles. The most important advancement came with the creation of the TCP/IP protocol suite by Vint Cerf and Bob Kahn, which became the standard for data communication. TCP/IP allowed diverse and previously incompatible networks to interconnect—effectively laying the groundwork for what would become the modern Internet.
By January 1, 1983, ARPANET formally adopted TCP/IP, and this date is often marked as the "birth of the Internet." Around this time, the domain name system (DNS) was introduced, making addresses more user-friendly (e.g., replacing numerical IP addresses with names like “mit.edu”).
Though researchers and government agencies had been using the Internet for years, it wasn't until the early 1990s that it became accessible to the general public.
A major turning point came in 1989–1991, when British computer scientist Tim Berners-Lee invented the World Wide Web while working at CERN. He developed:
· HTML (HyperText Markup Language) – the language of web pages,
· HTTP (HyperText Transfer Protocol) – the protocol for transmitting web pages,
· URL (Uniform Resource Locator) – the system for locating web resources.
The first graphical web browser, Mosaic, was released in 1993, making the web much more user-friendly. Commercial Internet Service Providers (ISPs) began to emerge around the same time, offering dial-up access to the public. By the mid-1990s, the Internet had rapidly expanded into homes and businesses.
Initially designed for resilient communication and information sharing, the Internet evolved into a decentralized network for:
· Academic and scientific collaboration
· Commercial transactions and advertising
· Personal communication (e.g., email, chat, social media)
· Media distribution (e.g., video, audio, journalism)
· Civic engagement and political discourse
Its open architecture and lack of a single controlling authority fostered innovation and widespread participation across nations and sectors.
The Internet was designed to be robust and resilient to failure, particularly in its early military-oriented phase. Because of packet switching, even if some parts of the network go down (due to hardware failure, natural disaster, or attack), data can usually find alternative routes. The Internet is built with redundancy in mind, using multiple pathways to carry information across global networks; if a particular route is disrupted—whether by hardware failure, cyberattack, or natural disaster—data can usually be redirected through another. How effective this re-routing is depends on the extent and location of the damage.
However, there are limits. If a major infrastructure hub goes down (like an undersea cable or a key data centre), regions may experience slower speeds, partial outages, or even complete network loss. In extreme cases, internet access might be cut off entirely until repairs are made. Network administrators and internet service providers (ISPs) work to mitigate this with backup systems and distributed architectures, but disruptions can still have significant consequences.
Data Re-Routing
Data re-routing relies on a complex system of protocols and infrastructure that dynamically adjust the path data takes across networks to ensure reliability and efficiency. Here’s how it works:
1. Packet Switching & Routing
The internet uses a packet-switched network, meaning data is broken into small packets before being sent. Each packet contains:
· The destination address
· The sender’s address
· The actual data being transmitted
Routers examine these packets and determine the best path to send them based on network conditions.
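The idea can be illustrated with a toy sketch (this is purely illustrative, not a real network stack): a message is broken into packets, each carrying a destination address, a sender's address, and a chunk of the data, and the message can be reassembled even if the packets arrive out of order.

```python
# Toy sketch of packet switching. All names and addresses are invented
# for illustration; a real stack (IP/TCP) is far more involved.
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str       # the destination address
    src: str        # the sender's address
    seq: int        # sequence number, so packets can be put back in order
    payload: bytes  # the actual data being transmitted

def packetise(message: bytes, src: str, dest: str, size: int = 8) -> list[Packet]:
    """Break a message into fixed-size packets."""
    return [Packet(dest, src, i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets: list[Packet]) -> bytes:
    """Reassemble the message from packets that may arrive out of order."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = packetise(b"packet switching in miniature", "10.0.0.1", "10.0.0.2")
# Even if the network delivers the packets in reverse order, the
# sequence numbers let the receiver rebuild the original message.
assert reassemble(list(reversed(packets))) == b"packet switching in miniature"
```

Because each packet is self-contained, different packets of the same message can travel different routes, which is what makes the network tolerant of individual link failures.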
2. Dynamic Routing Protocols
Networks use protocols like Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF) to determine the most efficient way to forward data. If a part of the network is down, these protocols automatically find alternate routes to ensure delivery.
· BGP (Border Gateway Protocol) operates between large networks (autonomous systems) and helps data traverse the global internet.
· OSPF (Open Shortest Path First) is used within smaller networks to optimize routing within an organization or local provider.
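OSPF in particular computes routes with Dijkstra's shortest-path algorithm. The sketch below shows the principle on a toy network (the node names and link costs are invented): when a link fails, recomputing the shortest path automatically yields an alternate route.

```python
# Toy link-state routing in the spirit of OSPF, which runs Dijkstra's
# shortest-path algorithm. Node names and costs are invented.
import heapq

def shortest_path(links, start, goal):
    """Dijkstra's algorithm over a dict of {node: {neighbour: cost}}."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, c in links.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + c, neighbour, path + [neighbour]))
    return None  # no route exists

links = {
    "WLG": {"PMR": 1, "AKL": 5},
    "PMR": {"WLG": 1, "AKL": 2},
    "AKL": {"WLG": 5, "PMR": 2},
}
print(shortest_path(links, "WLG", "AKL"))  # (3, ['WLG', 'PMR', 'AKL'])

# Simulate a link failure: the WLG-PMR link goes down, and the
# recomputed shortest path falls back to the direct (costlier) link.
del links["WLG"]["PMR"]
del links["PMR"]["WLG"]
print(shortest_path(links, "WLG", "AKL"))  # (5, ['WLG', 'AKL'])
```

Real routers do essentially this continuously, exchanging link-state updates so that every router's map of the network stays current.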
3. Redundancy & Failover Systems
To prevent major disruptions:
· Internet service providers (ISPs) maintain multiple connections to other networks.
· Large data centers are often connected via multiple fiber-optic links.
· Undersea cables have alternative paths, allowing rerouting in case of damage.
4. Handling Network Failures
When a failure occurs (e.g., damaged cables, cyberattacks, or hardware issues), routing protocols detect the disruption. The affected routers send updated routing tables to inform all connected systems that a particular path is unavailable. Alternative routes are calculated in real-time.
5. Traffic Optimization & Load Balancing
Some networks use load balancing algorithms to distribute data across multiple routes, preventing congestion and ensuring stability. Technologies like Content Delivery Networks (CDNs) help optimize routing by storing data closer to users, reducing dependency on long-distance routing.
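A minimal sketch of the simplest such scheme, round-robin balancing, is shown below (the route names are invented): each incoming request is sent to the next route in turn, so no single path carries all the traffic.

```python
# Minimal round-robin load balancer sketch. Route names are invented;
# production balancers also weight routes and track their health.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, routes):
        self._routes = cycle(routes)  # endlessly repeats the route list

    def pick(self):
        """Return the route the next request should take."""
        return next(self._routes)

balancer = RoundRobinBalancer(["link-a", "link-b", "link-c"])
picks = [balancer.pick() for _ in range(6)]
# Six requests are spread evenly: each route is used exactly twice.
```

Real load balancers add health checks and weighting on top of this, so that a congested or failed route is skipped rather than handed every third request.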
6. Undersea Cable & Satellite Re-Routing
When large-scale failures occur, such as damaged undersea cables, satellite links can provide emergency connectivity. Governments and major ISPs often have contingency plans for restoring access via satellites or alternative terrestrial routes.
Even though these systems make the internet highly resilient, large disruptions, especially at key internet hubs, can still slow things down or cause regional outages.
However, while highly reliable, the Internet is not infallible. Failures can occur due to:
· Physical damage to infrastructure (e.g., undersea cable cuts, power outages)
· Cyberattacks (e.g., DDoS attacks, malware)
· Routing issues or misconfigurations
· Government-imposed shutdowns (common in authoritarian regimes)
When the Internet fails partially or entirely in a region, alternative communication methods may still operate:
· Mobile networks (though they may also rely on Internet backhaul)
· Satellite communications
· Radio transmissions and mesh networks (used in disasters or remote areas)
· Offline peer-to-peer apps.
In critical situations, decentralized tools and emergency protocols help preserve some degree of connectivity.
So it is clear that the Internet, although highly resilient and with a number of built-in redundancies, is not completely fail-safe. Certainly the Chorus outage demonstrates that there must be a plan B to ensure the maintenance of connectivity. Internet users will be well aware of “internet time” and the fact that an outage of 90 minutes may well seem like an eternity.
In concluding, there are two matters worthy of comment. The first is that InternetNZ, the organisation responsible for the .nz domain name space, and which has as one of its objectives a free and open internet, was silent on both outages. No expressions of concern. No “explainer” to members. Nothing. Yet the maintenance of a resilient internet is essential to its function.
The second point is that the disruption of wireless networks by a military vessel – and a friendly one at that – must cause even greater concern. Given the necessity of immediate information at a time of crisis one wonders how much easier it might be for an unfriendly military vessel to disrupt digital wireless networks. Security of communications must be of prime importance.
TIA - “This is Aotearoa”.