NZNOG 2026 was held in Christchurch in March 2026. The networking community continues to grow, and the attendee count of 260 was the largest so far. New Zealand is a moderately small island economy in the South Pacific, and it appears to make up for its geographic isolation with its openness to innovation and experimentation. The networking community has a long track record of innovation, both in technology and in the underlying investment models for its network infrastructure.
Here's a summary of some of the sessions that I found to be of interest.
Akamai's approach to content distribution
Akamai is a veteran in the Internet content distribution world. In the late 1990s, Akamai launched its original model of placing managed content servers in the racks of consumer retail Internet Service Providers (ISPs). In this way, Akamai was one of the earliest Content Distribution Networks (CDNs) in the Internet. The company took opportunistic web caching services and transformed the model by placing the content source at the edge of the network, close to the users who consume the content. Akamai's CDN service was subsequently expanded into general cloud computing and security service provision.
Akamai later expanded the original placement model by deploying content servers at various exchanges and even in some transit networks. Akamai now operates over 4,000 edge Points of Presence (PoPs) and connects to some 1,200 access networks. The edge PoPs operate in a cache mode. Requests that cannot be served from the edge-tier caches are referred back towards the larger mid-tier servers, which can then pull the content from the origin server when it is not already in their local cache.
Akamai have generally used the Domain Name System (DNS) to 'map' a user to an Akamai server. When any of the Akamai nameservers are queried for an Akamai-managed name, the nameserver attempts to triangulate the assumed location of the querying recursive DNS resolver against the set of potential Akamai content servers. It returns the address of what Akamai's nameserver calculates to be the optimal content server. The DNS response has a low time to live (TTL), pushing new Akamai clients to invoke the same triangulation exercise.
The assumption behind this approach is that a user's location and their DNS recursive resolver are located close together, as the query passed by a recursive resolver to an authoritative nameserver does not contain any details of the end user. In many cases, this assumption of the correlation between a user and their DNS resolver is workable and provides adequate results. The assumption breaks down when the user's query is directed towards an open DNS resolver, such as Google's 8.8.8.8 or Cloudflare's 1.1.1.1 open DNS service platforms.
There is a DNS-based solution to this. In 2016, the IETF published RFC 7871, 'Client Subnet in DNS Queries'. This EDNS Client Subnet (ECS) query option is used by the recursive resolver to attach the encompassing subnet of the client's address to the query passed to the authoritative server. The authoritative server can use this ECS data in its response to indicate the network scope for which the response is intended. The recursive resolver caches the response together with this subnet, and can then use its locally cached copy when there is a match for both the query name and the query source IP subnet in the cache.
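The ECS option itself is a small EDNS(0) payload: an option code of 8, an address family, the source and scope prefix lengths, and the subnet address truncated to the prefix. As a rough sketch of the wire format described in RFC 7871 (the helper name is mine):

```python
import ipaddress
import struct

def encode_ecs_option(subnet: str) -> bytes:
    """Encode an EDNS Client Subnet option (RFC 7871, option code 8):
    FAMILY (2 bytes), SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH,
    then the address truncated to the bytes covered by the prefix."""
    net = ipaddress.ip_network(subnet)
    family = 1 if net.version == 4 else 2
    scope = 0  # always zero in queries; the authoritative server sets it in responses
    addr = net.network_address.packed[: (net.prefixlen + 7) // 8]
    payload = struct.pack("!HBB", family, net.prefixlen, scope) + addr
    return struct.pack("!HH", 8, len(payload)) + payload

# A /24 is a typical truncation: the client's full address is not sent
option = encode_ecs_option("192.0.2.0/24")
```

Note that even in this privacy-conscious truncated form, a /24 is usually more than enough to geolocate the end user, which is exactly the tension RFC 7871 warns about.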
This is a significant shift away from the inherent privacy properties of the DNS, as passing an indication of the original querier's identity to authoritative servers is not a conventional behaviour in the DNS framework. The fact that this Client Subnet option can be attached to a query made by the recursive resolver without the permission, or even the knowledge, of the original end user is also a problem.
As RFC 7871 notes:
"If we were just beginning to design this mechanism, and not documenting existing protocol, it is unlikely that we would have done things exactly this way. … We recommend that the feature be turned off by default in all nameserver software, and that operators only enable it explicitly in those circumstances where it provides a clear benefit for their clients. We also encourage the deployment of means to allow users to make use of the opt-out provided. Finally, we recommend that others avoid techniques that may introduce additional metadata in future work, as it may damage user trust."
It's unclear if anyone has listened to this advice.
There is also an issue with the location granularity of the Akamai response. For networks that operate over a wide geographic area, a recursive resolver might be geographically distant from the end user it serves. When Akamai are attempting to triangulate the user location, they are doing so by using the IP address of the recursive resolver used to pass the query to Akamai, not the client's IP address.
Akamai does not operate its own backbone network; content flows from origin servers through mid-tier caches to edge caches entirely over the public Internet. Like the DNS itself, Akamai gains its performance leverage by caching content close to the edge. This means it is not reliant on the routing system, as is the case with anycast content distribution networks. Akamai's major issue is that it has limited control over the DNS's retention and potential redistribution of DNS responses. Its use of short TTLs and ECS attempts to limit the reuse of DNS responses and force new content clients through the same Akamai DNS-based location triangulation process.
When Akamai has made a cache steering decision for the delivery of an item, it appears to be a fixed decision. Other CDNs have gone a step further in attempting to optimize the end-user experience. For example, the steering approach used by Google's YouTube is a hybrid approach, using both DNS steering and breaking the content into chunks that are variously served from nearby content caches. The service uses DNS steering to route the client to a collection of front-end service units that are located close to the client. YouTube then periodically 'dithers' the feed of individual content chunks across the other candidate service units to ensure that the best-performing service delivery unit is being used for the majority of the content traffic.
This generic approach of DNS steering in the service delivery world has some profound implications for the network in aspects of architecture, design, robustness, and utility. If we no longer rely on carriage services to sustain service quality, then we are no longer reliant on the routing system to produce optimal outcomes in terms of path quality. The value of adding robust security to the routing environment diminishes as our dependence on routing as a service platform declines.
Arista and an alphabet soup of LAN emulators
If there is a single consistent story of the past few decades in networking, it's a story of increasing capability of connected devices, and a comparable drop in the demand for complex services from the network that they connect to.
Value, in its various forms, is moving out of the network and into the edge devices and the applications that they host. Outsourcing the creation and maintenance of closed connectivity communities, such as virtual private networks, to the network no longer makes sense, given that modern devices and applications offer far greater capabilities than the network provider can deliver.
The response from the vendors of network equipment is to add to the alphabet soup of EVPN VXLAN, EVPN-VPWS, EVPN-VPWS-FXC, EVPN MPLS, VXLAN over IPSEC, SR, SR-TE, SRv6, and on and on. One interpretation is that network equipment vendors are attempting to elevate the perceived value of complex service offerings to support premium pricing models.
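Much of this soup is, at heart, just encapsulation. VXLAN, for instance, prepends only eight bytes (plus the outer UDP/IP headers) to the inner Ethernet frame. A minimal sketch of the RFC 7348 header (the helper name is mine):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): a flags byte with the
    'I' bit set (VNI present), three reserved bytes, a 24-bit VXLAN
    Network Identifier, and a final reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit field")
    return struct.pack("!B3s3sB", 0x08, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

# The encapsulated packet is outer Ethernet/IP/UDP + this header + the inner frame
header = vxlan_header(5000)
```

The complexity, and the pricing premium, lies not in the header but in the control plane wrapped around it.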
If expected levels of customer adoption fail to emerge, vendors may interpret this as a deficit in features or functionality. The typical response is to introduce additional complexity into their equipment and to broaden the range and naming of their service offerings.
The transition of enterprise customers from running their own private servers and private networks - virtual or otherwise - to simply pushing all this back into a cloud provider is accelerating, and no amount of alphabet soup from vendors such as Arista can stop this!
RADIUS still lives!
In the words of the presenter, Alan DeKok: "RADIUS is the protocol that will never die."
Remote Authentication Dial-In User Service (RADIUS) had its moment in the days of dial-up access services, when a user would make contact with a network access service and provide a username and password. RADIUS was supposedly overtaken by Diameter as a general-purpose Authentication, Authorization, and Accounting (AAA) service. But RADIUS is ubiquitous, open source, and, interestingly, baked into the IEEE 802.1X standards, and for these reasons, it lives on!
It's not that RADIUS is perfect. Far from it. RFC 5080, published in 2007, listed several common implementation issues and suggested fixes, and the community has continued that effort. There are even some RADIUS implementations containing these fixes. But the generic issue with free software is that it's simultaneously everyone's problem and nobody's problem.
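A concrete example of the protocol's dated cryptography is the way the User-Password attribute is hidden in an Access-Request (RFC 2865, Section 5.2): the password is padded and XORed with an MD5 keystream derived from the shared secret and the Request Authenticator. A sketch (the function name is mine):

```python
import hashlib

def hide_user_password(password: bytes, secret: bytes, request_auth: bytes) -> bytes:
    """Obfuscate a User-Password attribute per RFC 2865, Section 5.2.
    Each 16-byte block of the NUL-padded password is XORed with
    MD5(secret + previous block), seeded with the 16-byte Request
    Authenticator. MD5 here is obfuscation, not modern encryption."""
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", request_auth
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(padded[i:i + 16], keystream))
        out += block
        prev = block
    return out
```

Running the same construction in reverse recovers the password, which is why deployments lean so heavily on the secrecy of the shared secret and, increasingly, on wrapping the whole exchange in TLS.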
Software from vendors has a controlling entity that is maintaining coherency, tracking bugs, and working on incremental changes to the code base that fix the issues without disturbing all other aspects of the code. A vendor distributes updates to users, in both servers (easier) and embedded clients (mind-bendingly hard at times!). There are many code bases that implement RADIUS, maintained in many different ways - including not at all - and the effort to fix issues and improve the software often exceeds the total capacity available.
Despite this, RADIUS lives on. Some implementations remain vulnerable to decades-old flaws, while others are carefully maintained yet exhibit subtle differences in behaviour. Some codebases try to keep up with the ongoing efforts to identify and resolve issues, while others do not.
Is this different from other widely implemented open source tools? Not really.
PON broadband access networks
Like many developed economies, New Zealand has been deploying fibre optical networks for network access reticulation for more than a decade now. The Enable NZ enterprise network was based on 10G trunks to the Gigabit Passive Optical Network (GPON) optical line termination units, and 2.5 / 1.25Gbps services to the PON splitters that service individual residences.
While a decade ago this may have appeared to be a high-capacity network, network capacity has a tendency to fill over time. The provider therefore undertook an upgrade, replacing Huawei with a Nokia access network and a Cisco core backbone. This upgrade delivered an order-of-magnitude increase in capacity, as well as a more politically acceptable set of network equipment vendors. The resulting architecture provides XGS-PON capability, delivering symmetric 10Gbps access services, along with 100Gbps internal links between the spine infrastructure and the Optical Line Terminals (OLTs).
The upgrade has not been without its issues, and Enable NZ has seen its share of operational problems.
An audience member asked the presenter whether they would have built a multi-vendor network given all they'd learned operating the platform. The answer was short, sweet, and telling: 'No'!
The inevitable AI presentation
These days, it seems to be mandatory to have a presentation on the application of Artificial Intelligence (AI) techniques to network operations. This time, the topic was network log processing.
There are some caveats about feeding a Large Language Model (LLM) with the network's configuration and diagnostic reports, as the results so far appear to be somewhat variable in quality. The issue, as always, is that AI is not a deductive tool and does not apply semantic knowledge to the domain it is operating in.
LLMs assemble patterns of behaviour, and match new inputs against those observed patterns to infer the most likely next element. Admittedly, it's still early days here, but the promise of replacing the role of human operational support with basic pattern matching seems to be more hype than reality.
Network management
As much as some aspects of network technology appear to be rapidly changing, the area of network management changes incredibly slowly.
Networks were managed in the early 1990s using the Simple Network Management Protocol (SNMP). SNMP modelled a managed device as a collection of registers (or counters), with the basic operations of get and set to interrogate and modify the device's operation. There was also the concept of a device-initiated notification, or trap, issued when a device's register value changed beyond some threshold value.
A structure was imposed on this collection of registers through the definition of a Management Information Base (MIB), allowing the network manager to infer the operational state of a managed device by examining the values in the device's MIB. It was a clunky approach, but it permitted the addition of network management hooks into otherwise simple devices, and SNMP quickly became the foundation of most network management systems.
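The register model is simple enough to sketch in a few lines. The following toy agent (the OIDs and threshold are illustrative; this is not a real SNMP implementation) shows the get/set operations and a threshold-triggered trap:

```python
class ToyAgent:
    """A toy sketch of the SNMP device model: a MIB as a dictionary of
    OID-indexed registers, with get/set operations and a trap callback
    fired when a register crosses a configured threshold."""

    def __init__(self, trap_handler):
        self.mib = {
            "1.3.6.1.2.1.1.5.0": "router1",  # sysName
            "1.3.6.1.2.1.2.2.1.10.1": 0,     # ifInOctets for interface 1
        }
        self.thresholds = {"1.3.6.1.2.1.2.2.1.10.1": 1_000_000}
        self.trap_handler = trap_handler

    def get(self, oid):
        return self.mib[oid]

    def set(self, oid, value):
        self.mib[oid] = value
        limit = self.thresholds.get(oid)
        if limit is not None and value > limit:
            self.trap_handler(oid, value)  # device-initiated notification

traps = []
agent = ToyAgent(lambda oid, value: traps.append((oid, value)))
agent.set("1.3.6.1.2.1.2.2.1.10.1", 2_000_000)  # crosses the threshold, fires a trap
```

Everything else in SNMP — BER encoding, community strings, versioning — is machinery wrapped around this very simple core.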
SNMP was accompanied by a text Command Line Interface (CLI). These CLIs drew their inspiration from the data initialization statements used in the C language (among others), where the device was modelled as a set of named records, which could contain variables or other records. A configuration was generated by a text sequence naming these records, and enumerating the variables and their values.
This approach had its issues. It was clunky and had a very simplistic (naive) security model, but it persisted for decades.
In the early 2000s, the industry moved towards Network Configuration Protocol (NETCONF), which used the concept of Remote Procedure Calls (RPCs) that allowed the management controller to execute more complex functions on the managed device.
The data model definition was refined in 2010 by Yet Another Next Generation (YANG), a data description language built on a hierarchy of modules, containers, lists, and leaves. YANG bindings were defined for many programming languages and platforms, allowing Application Programming Interface (API) access to the operating state of a managed device.
In 2015, Google open-sourced gRPC, using protocol buffers as the description language for request and reply payloads, with the RPC function layered over HTTP/2, which itself could be layered over Transport Layer Security (TLS). gRPC interfaces were defined for common network operations, including the gRPC Network Management Interface (gNMI) for configuration and state manipulation, with set/get functions and streaming telemetry, and the gRPC Network Operations Interface (gNOI) for common operations on a device, including file operations, ping and traceroute, and validation of a link's operational status.
There is certainly an evolutionary path going on here, although very slowly. The initial approach was 'imperative', where particular directives were passed to the managed device in the context of the device's capabilities. The evolution is heading to a 'declarative' approach of describing an end goal to the network management system, and allowing the system to determine the specific actions to achieve that objective.
Frustrated network operators may wonder, if we can already build systems for autonomous self-driving cars, then why can't we build systems for autonomous self-operating networks?
If the answer is that such a task requires a massive capital investment, then comprehensive network operations automation is just not an attractive investment proposition. As we plead for better network management tools, we continue to move functionality and value out of the network and load it up into the edge and the application layer. I suspect that the desire for more sophisticated network management tooling is not accompanied by the capability to pay for it! The result is that nothing much happens. Slowly.
Optics
In contrast, the area of optical transmission is a very active one.
The initial optical systems used light-emitting diodes as light sources, and light-sensitive diodes as receivers, with a basic on/off optical keying. Successive refinements of both the transmitter and the receiver have allowed this simple signal modulation to reliably achieve signal rates of 10Gbps, and lab results ten times that speed.
Further speed refinements of this modulation form are generally considered to be somewhat unlikely. If we want to achieve higher optical speeds, we can use the same techniques that we used in electrical signal processing, turning to modulation techniques that change the amplitude, phase, and polarization of the carrier light signal.
This functionality can be implemented in pluggable optical transceivers. These are simple devices with three components: a digital signal processor, a receiver, and a transmitter (Figure 3).
The optical transmitter is a solid-state semiconductor laser, operating at room temperature. By contrast, the earlier LED light sources operated at around 1300nm with a large line width of 100nm.
Several other types of lasers are also used in optical networking.
A traditional approach to increasing the capacity of a fibre is to use frequency division multiplexing, dividing the total fibre capacity by frequency. This is inefficient as the filters required to pull apart the individual channels need guard bands, or channel separation, to prevent channel crosstalk. Another approach is to use a single carrier with a far higher bandwidth.
Optical receivers can be light-sensitive diodes, where light reaching the diode opens the conductor path. Receiver switching speeds can be tuned into the gigahertz frequency ranges. If the incoming optical signal is coupled with the output from a local oscillator, then the carrier signal can be removed, and the phase shifts used by modulation keying can be detected by a pair of receiver diodes.
So, we have both simple on-off keying (OOK) and phase-amplitude keying to modulate an optical carrier. For a 400Gbps data stream, using a carrier with 16-point Quadrature Amplitude Modulation (16-QAM), it becomes necessary to use 75GHz of optical spectrum bandwidth. Achieving 800Gbps requires double this bandwidth, and 1.6Tbps double again.
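These bandwidth figures follow from a back-of-envelope calculation: a dual-polarization 16-QAM symbol carries 8 bits, so 400Gbps plus forward error correction (FEC) overhead needs a symbol rate in the region of 60GBd, which with filter roll-off fits in a 75GHz channel. A rough sketch (the ~20% FEC overhead figure is my assumption):

```python
import math

def symbol_rate_gbaud(bit_rate_gbps: float, qam_order: int,
                      polarizations: int = 2, fec_overhead: float = 0.20) -> float:
    """Approximate line symbol rate for a coherent channel:
    bits per symbol = log2(QAM order) x polarizations, with the
    bit rate inflated by the FEC overhead before dividing."""
    bits_per_symbol = math.log2(qam_order) * polarizations
    return bit_rate_gbps * (1 + fec_overhead) / bits_per_symbol

rate_400g = symbol_rate_gbaud(400, 16)  # ~60 GBd, fitting a 75GHz channel
rate_800g = symbol_rate_gbaud(800, 16)  # double the symbol rate, double the spectrum
```

The same arithmetic shows the designer's choices: 800Gbps can come from doubling the symbol rate, moving to a denser constellation, or some mix of both, each with its own reach penalty.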
Dense Wavelength Division Multiplexing (DWDM) of multiple OOK signals uses 96 channels, each of 10Gbps, mixed into a common bearer with 50GHz band filters for each channel. Those inter-channel guard bands consume a lot of spectrum on the cable. When we go towards 400G and above, we can reduce the number of channels, down to 12, for example, giving 400GHz filter blocks, and then feed 400Gbps, 800Gbps, or even 1.6Tbps signals into such coherent systems.
The optical receiver is configured as an intradyne receiver, where the local oscillator is tuned as close to the optical carrier frequency as possible, and within the optical carrier bandwidth. This gives an intermediate frequency of around 5GHz, which is easily handled within the on-chip capabilities of the Digital Signal Processor (DSP). The side-effect of this is that any other carrier signal in the fibre is not detected, as the intradyne receiver only picks up the carrier signal for which the receiver has been tuned.
This allows for a high degree of flexibility, where narrow-band DWDM signals can be used in one part of a fibre's optical spectrum, while another part can be used for these wide-band 400GHz spectrum blocks (Figure 4). Coherent detection techniques in fibre are robust and very impressive, and 400GHz muxes provide a lot of flexibility in the design of optical networks. 800G Coherent Extended Reach (ZR+) pluggable optics, in Quad Small Form-factor Pluggable Double Density (QSFP-DD) and Octal Small Form-factor Pluggable (OSFP) form factors, are now available.
A LEO service for mobiles
Starlink is an impressive Low Earth Orbit (LEO) service. Using phased array software-steerable antennae and a 1m dish, looking 340km to 550km upward to a constellation of 10,000 spacecraft, users can access the Internet at speeds of 200Mbps almost anywhere on the surface of the earth. Amazon Leo is launching 3,000 spacecraft into a slightly higher orbital plane of 600km, apparently offering a service portfolio like that of Starlink, which is equally impressive.
Starlink provides a service for mobile phones, but there are some quite severe restrictions on the signal capabilities of their service. This is due to the unfocused transmission of the mobile device, the limited transmission power, and the 350km distance between the handset and spacecraft. Reports of available capacity for mobile devices using Starlink are around 5Kbps, so the service is limited to SMS and location services and has limited voice capability.
To provide a service to mobile devices from a space platform that is like the service from terrestrial networks, the basic requirement is to increase the signal power and signal sensitivity. That's where AST SpaceMobile and their spacecraft come into play. AST's spacecraft has a very large 223 square metre antenna. The earth-facing side is an array of phased-array units, and the other side is covered by solar panels. AST plans to deploy 95 of these spacecraft in orbit. Communication with the satellites will use the Q/V bands.
With their large antennae, they intend to operate as a space-based base station with direct-to-handset 4G and 5G digital service using the 3GPP standards, operating in the 900MHz Band 8 LTE spectrum, with conventional voice and data services. AST will use a 5MHz bandwidth allocation, compared to 5G's allocation of 145MHz.
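A free-space path loss estimate shows why such a large aperture is needed for direct-to-handset service. Using the standard approximation FSPL(dB) = 20log10(d/km) + 20log10(f/GHz) + 92.45, a 900MHz link over a ~550km slant range loses roughly 146dB before any other impairment is counted, some 40dB worse than a generous terrestrial cell (the distances here are illustrative):

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB, for distance in km and frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

handset_to_leo = fspl_db(550, 0.9)  # ~146 dB at 900MHz over a LEO slant range
terrestrial = fspl_db(5, 0.9)       # ~105 dB for a 5km terrestrial cell
```

Every extra 6dB of path loss demands a doubling of effective antenna aperture (or transmit power) at one end of the link, and since the handset end is fixed, the spacecraft antenna has to grow.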
Their commercial model is not a Starlink-style direct retail model, but a wholesale one based on service agreements with local mobile providers. The local mobile provider operates the ground stations and connects the ground stations directly to the terrestrial mobile network.
The New Zealand provider, 2degrees, is looking at using AST's service for disaster recovery and emergency services, remote communications in search and rescue, and mission-critical applications.
There is an interesting set of market tensions here.
Is there a viable market volume for services to mobile handsets alone that can sustain a dedicated service operator such as AST's offering? Or are terrestrial services platforms already so well established that the only exposed markets are niche markets in extremely remote locations with low numbers of subscribers? And is the population of such niche markets sufficiently large to meet the capital return requirements of launching an LEO service?
Starlink and Amazon Leo have headed in the other direction, driven by the market for high-speed data services that are priced competitively with many terrestrial providers. Mobile services are a secondary thought for these LEO constellation operators, and the service quality available to mobile handsets is far lower than users expect from terrestrial 4G and 5G networks.
As SpaceX heads closer to an Initial Public Offering (IPO), investors in the market seem to believe that this data-oriented model is the sustainable one. It's not clear yet whether the AST offering is another run of the Iridium story, which is a distinct risk in its focus on the mobile handset market.
Open fibre data standards
As we have seen over the years, telecommunications infrastructure is a strategic asset. Open disclosure of the location of cables, buildings housing switching equipment, and power stations and generators is all too easily transformed into a target list when hostilities break out. This is by no means a recent concern.
When Britain entered World War 1, one of its first acts was to cut five submarine cables in the English Channel linking Germany to France, Spain, and the Azores, forcing Germany to use radio systems for its international communications.
One of the most serious consequences of the cable cutting for Germany was that Britain was able to intercept and decode the Zimmermann telegram. This was an attempt by Germany to make a secret alliance with Mexico, which stood to gain United States territory as a result.
Without a secure telegraph connection of their own to the Americas, the Germans were allowed to use the US diplomatic telegraph link, which the US believed would assist peace efforts.
Unfortunately for the Germans, this supposedly secure route went through Britain and was monitored by British intelligence. The revelation of this German duplicity was partly responsible for the US later entering the war.
One response is to try to keep the location of telecommunications infrastructure a highly restricted dataset and criminalize its deliberate disclosure, particularly if the intent behind the disclosure was hostile or malicious.
However, shrouding the location of this infrastructure in a veil of secrecy increases the likelihood of accidental disruption, and the precise location of this infrastructure is already well known to capable adversaries. So why not go the other way and make the entire dataset open? This is the motivation for the Open Fibre Data Standard program, supported by the Internet Society (ISOC) and others.
Does this result in a more resilient outcome for users? Do such maps assist in responses to natural disasters? Or does the disclosure of such information act to increase the attack surface for critical infrastructure? Personally, I am of the view that more data is always better.
We are increasingly reliant on private sector investment to build and maintain communications infrastructure, and such loosely coordinated actions by multiple providers are materially assisted by more accurate and timely information on the state of communications infrastructure.
Optical transport
These days, speeds of 100Gbps, 200Gbps, and even 400Gbps are old news in trunk communications, as optical networks shift to 800Gbps and 1.6Tbps line speeds (Figure 6).
There are several technologies that have enabled this trend. The critical change is the shift from OOK modulation to coherent modulation, using the phase-amplitude space to encode more bits per baud and increasing the basic baud rate. This is supported by more capable DSPs.
Such capabilities are bound by the gate density from the chip fabrication process, and higher capabilities in DSPs are enabled as the industry shifts into 3nm and 2nm chip feature sizes. Today's systems are also capable of performing probabilistic shaping of the points used in the phase amplitude space. Instead of falling back from an 8×8 grid used in a 64 QAM space to a 4×4 grid in 16 QAM space, the DSPs can remove just those points in the QAM space where the line noise prevents accurate decoding.
Thin-film lithium niobate (TFLN), Silicon Photonics (SiPh), and Indium Phosphide (InP) represent the three most critical platforms in modern integrated optics, driving high-speed, low-power data communication in the 800Gbps and 1.6Tbps ranges. Coherent pluggable transponders are also evolving. Their power requirements are increasing, and this creates a heat dissipation problem. Liquid-cooled pluggables, which can cool optical modules that operate at 70 to 100 watts, are now on the market for these high-speed, longer-distance applications.
With the wealth of tradeoffs available these days in the design of a fibre backbone, the overall design task becomes more complex. A backbone carrier will have many client demands for different capacities and different transmission lengths. The challenge is to manage the overall fibre capacity and optical transponder settings to maximize the efficiency of the fibre network, and to do so without unduly inflating the cost of the network.
NZNOG 2026
This is a small selection of the presentations at NZNOG 2026. Check out the full program and the recordings of the sessions.
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.