Every Adelaide business has one. It might be a converted storeroom, a corner of the warehouse, or a cupboard next to the kitchen. It houses the server, the switches, the patch panel, and a UPS that was last tested sometime during the Howard government. There is probably a split-system air conditioner bolted to the wall. The door lock is a padlock, or the door does not lock at all.
This is the office comms room, and it is where a surprising number of Adelaide businesses house their most critical infrastructure. It works — until it does not. And when it does not, the consequences are disproportionate to the size of the room.
How much protection the typical comms room actually provides
Most office comms rooms were designed to consolidate cabling, not to provide enterprise-grade infrastructure protection. When you inventory what is actually there, the gaps become obvious:
- Power: Single-phase from the building's main board, with a UPS sized for a short runtime — typically 10 to 20 minutes. No generator backup. No dual power paths.
- Cooling: A wall-mounted split system, often the same one installed years ago and not on a dedicated maintenance contract. If it fails in January during an Adelaide summer heatwave, the equipment it is cooling fails shortly after.
- Physical security: In the best cases, a locked door with a key held by two or three people. In the worst cases, a door that is left open because it is also used for storage.
- Monitoring: Typically none. If a server goes down at 11pm, the first person to know is the one who tries to log in at 8am.
None of this is negligence — it is simply the reality of infrastructure that grew organically rather than being purpose-built.
South Australia's power grid and why it matters
Adelaide businesses face a specific challenge that businesses in other states do not feel as acutely: South Australia's electricity grid has a well-documented history of volatility. The state's transition to high levels of renewable generation, while positive for emissions, has at times introduced grid stability challenges that translate to more frequent voltage events, micro-outages, and in severe cases, extended blackouts.
The September 2016 statewide blackout is the most dramatic example, but localised outages affecting suburbs or business districts are a regular occurrence, particularly during extreme heat events when demand peaks and transmission infrastructure is under stress. A UPS with a ten-minute runtime is not a solution in that context — it is a countdown timer.
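To put numbers on that, the runtime arithmetic is simple battery maths. The sketch below uses entirely hypothetical figures for battery capacity and load; the point is the shape of the calculation, not the specific numbers.

```python
# Rough UPS runtime estimate. All figures are illustrative placeholders,
# not the specifications of any particular UPS or comms room.

battery_capacity_wh = 600    # hypothetical usable battery energy (watt-hours)
inverter_efficiency = 0.85   # rough allowance for DC-to-AC conversion losses
it_load_w = 2500             # hypothetical draw: a couple of servers, switches, a NAS

runtime_minutes = (battery_capacity_wh * inverter_efficiency / it_load_w) * 60
print(f"Estimated runtime: {runtime_minutes:.0f} minutes")  # roughly 12 minutes
```

Batteries also deliver less than their rated capacity at high discharge rates and as they age, so real-world runtime is usually shorter than the label suggests. Any outage longer than that window ends in either a scripted shutdown or an unclean one.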
Servers and network equipment are particularly sensitive to power quality issues. A clean power cut followed by a clean restoration is manageable. Voltage sags, surges, and brown-outs — which are common during high-demand periods — can cause hardware damage, data corruption, and unclean shutdowns that leave file systems in an inconsistent state.
What happens when the comms room fails
The failure scenarios are well-understood by any business that has experienced one. When the on-premises server infrastructure goes down:
- Staff cannot access shared files or internal applications
- Email may stop flowing if the mail server is on-premises
- VoIP phone systems hosted on local servers go silent
- Remote desktop and VPN connections back to the office fail for staff working offsite
- Any business system hosted locally — accounting, ERP, practice management software — becomes inaccessible
The business effectively stops. And the timeline to recovery depends entirely on when someone with the right access and skills can physically attend the site, which, outside business hours, may not be until the next morning.
What a professional data centre provides instead
Caznet's Adelaide co-location facility is purpose-built to address exactly these failure modes. The infrastructure differences are significant:
Power redundancy. Data centres run N+1 or 2N power infrastructure, meaning there is always more capacity than the load requires and more than one independent path to deliver it. UPS systems are commercial-grade and sized for extended runtimes, not the 15-minute consumer units common in office comms rooms. Diesel generators then carry the load until grid power is restored, for as long as they can be refuelled.
Precision cooling. Rather than a split system that was installed by whoever did the office fitout, data centres use precision air conditioning units designed for high-density equipment rooms. Temperature and humidity are monitored continuously and maintained within tight tolerances. Redundant units mean a cooling failure does not cascade into a server failure.
Physical security. Keycard access with individual audit trails, CCTV coverage of all access points and the floor itself, and in many facilities, biometric access for higher-security cages. Physical access to your equipment is logged and controlled — not reliant on who happens to have a key.
24/7 monitoring. Data centre staff monitor environmental conditions and network connectivity around the clock. Issues are identified and responded to before they become failures. If a power event occurs at 2am, there are people there to manage it — not an automated voicemail system and a call-out fee to rouse a technician.
What co-location actually costs
When businesses compare co-location costs to their current setup, they often compare the monthly co-location fee against nothing, because the comms room feels like it has already been paid for. The real comparison includes:
- UPS battery replacement, typically every 3–5 years
- Split-system servicing and eventual replacement
- After-hours callout fees when something fails at night or on a weekend
- The cost of any unplanned downtime
- The staff time spent managing and maintaining on-premises infrastructure
On that basis, co-location is often cost-neutral or cheaper, and the reliability improvement is substantial.
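To make that comparison concrete, a back-of-envelope annual tally can be sketched in a few lines. Every dollar figure below is a hypothetical placeholder; substitute your own quotes, invoices, and downtime estimates before drawing any conclusions.

```python
# Back-of-envelope annual cost comparison. Every figure is a hypothetical
# placeholder; replace them with real quotes and invoices.

comms_room_annual = {
    "ups_batteries_amortised": 400,   # e.g. a battery set replaced every few years
    "split_system_servicing": 600,
    "after_hours_callouts": 1200,
    "unplanned_downtime": 4000,       # lost staff hours and revenue, however you cost it
    "staff_time_maintaining": 3000,
}

colocation_annual = {
    "rack_space_fees": 500 * 12,      # hypothetical monthly co-location fee
    "remote_hands": 300,
}

print(f"Comms room, per year:   ${sum(comms_room_annual.values()):,}")
print(f"Co-location, per year:  ${sum(colocation_annual.values()):,}")
```

The output of a sketch like this depends entirely on the placeholder figures, but the structure of the comparison is the useful part: the comms room is never free once its real running costs are written down.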
Co-location is not the same as moving to the cloud
It is worth being clear on this distinction. Co-location means your physical servers and networking equipment are relocated to a professional data centre facility — you still own the hardware, you still control the operating systems and applications, and your data does not go anywhere you have not chosen to put it.
This matters for businesses that have compliance requirements about data sovereignty, organisations running software that is not cloud-compatible, or businesses that have made a deliberate decision to keep their workloads on-premises for cost or control reasons. Co-location gives you the infrastructure advantages of a data centre without requiring any changes to your applications or operating model.
The Adelaide advantage
Choosing a local Adelaide co-location facility rather than a Sydney or Melbourne data centre matters more than it might appear. Latency between the data centre and your office affects the performance of every application your staff access remotely. For applications that are latency-sensitive — remote desktop, voice-over-IP, real-time databases — even 20 to 30 milliseconds of additional round-trip time is noticeable.
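One way to ground that figure rather than take it on faith is to measure round-trip time yourself. A minimal sketch follows, assuming you have something to connect to in each candidate facility; the hostnames below are placeholders, not real endpoints.

```python
# Minimal TCP connect-time probe. Hostnames are placeholders; point it at
# hosts you actually run in each candidate facility.

import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the median TCP connect time to host:port, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return sorted(timings)[len(timings) // 2]

for host in ("server.adelaide-facility.example", "server.sydney-facility.example"):
    print(f"{host}: {tcp_rtt_ms(host):.1f} ms")
```

A TCP connection completes in roughly one round trip, so the connect time is a fair proxy for the per-interaction lag that remote desktop and VoIP users will actually feel.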
Local co-location also means that when physical access to your equipment is needed — whether for a hardware swap, a reboot, or an emergency intervention — you or your IT team can be on-site within the hour, not on a plane.
If your comms room is the weak link in your business infrastructure — and for most Adelaide businesses it is — a conversation about co-location is worth having before the next power event forces the issue.