Caznet installs and manages equipment across multiple data centre facilities, from SMB co-location through to corporate and education deployments. Over time, we've developed a set of rules we follow every time — rules that exist because of hard-won experience, including a few near-misses we're not proud of.
Here are the six that matter most.
1. Plan it out before you go
Data centres are not places where you figure things out as you go. Once equipment is powered up and running production workloads, opportunities to make changes become rare and risky.
Before you rack anything, you should know exactly where every device is going, how the cabling routes, which power feeds you're using, and how future expansion will fit into the space. A simple rack diagram (Microsoft Visio works fine) saves enormous time and prevents the situation where you've used up every contiguous 1U slot and find yourself rearranging running production servers.
Think about airflow, cable management, and service access. The equipment you install might sit in that rack for ten years. Plan for it.
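If a full Visio diagram feels heavyweight, even a few lines of script can catch layout mistakes before install day. This is a minimal sketch, not a tool we ship: the rack height, device names, and U positions below are all hypothetical placeholders for your own inventory.

```python
# Minimal rack-plan sanity check. All names, positions, and the 42U
# height are hypothetical -- substitute your own inventory.

RACK_HEIGHT_U = 42

# (device, bottom U position, height in U); U positions are 1-indexed
plan = [
    ("core-switch-01", 42, 1),
    ("patch-panel-01", 41, 1),
    ("srv-app-01",     20, 2),
    ("srv-db-01",      10, 2),
    ("ups-a",           1, 3),
]

def check_plan(plan, height=RACK_HEIGHT_U):
    """Raise if any device overlaps another or extends outside the rack."""
    occupied = {}
    for name, bottom, size in plan:
        for u in range(bottom, bottom + size):
            if not 1 <= u <= height:
                raise ValueError(f"{name} extends outside the rack at U{u}")
            if u in occupied:
                raise ValueError(f"{name} overlaps {occupied[u]} at U{u}")
            occupied[u] = name
    print(f"Plan OK: {len(occupied)}U of {height}U allocated")

check_plan(plan)
```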
2. Rack it properly — all the way
Every screw goes in. Every cable gets dressed. Every device is properly secured before the next one goes in.
We once discovered a critical storage device that had been left unsecured in a rack, held up only by its cables and a shelf edge it overlapped by millimetres. It had been sitting like that for years, in production, without anyone noticing. One wrong cable pull and it would have fallen onto the server below it.
Don't take shortcuts. Install each device fully before moving to the next one. The extra five minutes per device is nothing compared to the consequences of a device failing because it wasn't mounted correctly.
3. Use RackStuds
If you're still using traditional cage nuts, try RackStuds. They're plastic studs that install and remove without tools, hold up to 20kg per stud, and dramatically reduce the time and frustration of rack installation.
Traditional cage nuts require a specific tool, are easy to drop (and subsequently impossible to find in a raised floor environment), and have sharp edges that have drawn blood in every data centre we've worked in. RackStuds solve all of this. They cost slightly more but are worth every cent.
4. Colour code and label everything
A data centre that isn't labelled is a data centre that will eventually cause an outage.
Power cables should follow a consistent colour scheme: one colour for A-side (primary) power feeds, another for B-side (secondary). Fibre should distinguish multimode from singlemode, conventionally aqua or orange jackets for multimode and yellow for singlemode. Every cable endpoint — patch panel port, switch port, and device — should be labelled with what it connects to.
Every device should be labelled with its hostname and management IP address. The label should be readable from the front and, where possible, the rear. Six months after installation, when you're trying to identify a device in an emergency, you will be glad you did this.
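The convention matters more than the tooling, but generating labels from a script keeps the format identical across hundreds of cables. The format strings and example names below are assumptions, not a standard; adapt them to whatever scheme your site already uses.

```python
# Hypothetical labelling convention, enforced in code so every label
# comes out the same. Adapt the format strings to your own scheme.

def device_label(hostname: str, mgmt_ip: str) -> str:
    """Front/rear device label: hostname over management IP."""
    return f"{hostname}\n{mgmt_ip}"

def cable_label(a_end: str, b_end: str) -> str:
    """Both ends of a cable carry the same near/far pair."""
    return f"{a_end} <-> {b_end}"

print(device_label("srv-db-01", "10.20.0.11"))
print(cable_label("SW01:Gi1/0/24", "SRV-DB-01:eth0"))
```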
5. Leave room — don't fill the rack
It's tempting to make full use of the rack space you're paying for. Resist it. Cramming a rack to 100% capacity creates problems in every direction.
Airflow is the biggest issue. Data centre cooling works on the principle of cold air delivered at the front of the rack and hot air exhausted from the rear. A crammed rack impedes that airflow, runs equipment hotter, and shortens hardware life. Aim to fill racks to no more than 50–75% of capacity.
Beyond airflow: future expansion, cable management, and the ability to physically work in the rack all require space. Installing a device in a full rack means disturbing running equipment. That's a risk you don't want to take.
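As a rough sketch of that guideline applied to a planned layout (the 42U height and device sizes below are placeholders):

```python
# Quick check of planned utilisation against the 50-75% guideline.
# Rack height and device heights are hypothetical.

RACK_HEIGHT_U = 42
planned_u = [1, 1, 2, 2, 3, 1, 4]   # heights (in U) of devices to install

used = sum(planned_u)
utilisation = used / RACK_HEIGHT_U
print(f"{used}U of {RACK_HEIGHT_U}U planned ({utilisation:.0%})")
if utilisation > 0.75:
    print("Over 75% -- rethink the layout or take a second rack.")
```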
6. Configure out-of-band management before you leave
Out-of-band management (OOBM) — Dell iDRAC, HP iLO, Cisco CIMC, or equivalent — gives you remote access to a server's console even when the operating system is unresponsive or the network is down.
Configure it before you leave the data centre. This means: assigning a dedicated IP address on a separate management network or 4G connection, setting credentials, testing remote access, and verifying it works independently of the production network.
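"Verifying it works" is worth making mechanical. Here's a sketch of the kind of reachability check we mean, assuming the controller answers HTTPS on its management IP. The address is a placeholder, and the check should be run from the management network or the 4G path, not from the production LAN.

```python
# Minimal "does OOBM answer before I leave?" check. The IP is a
# placeholder for your iDRAC/iLO/CIMC management address.

import socket

MGMT_IP = "192.0.2.10"   # hypothetical management controller address
PORT = 443               # HTTPS web console; use 22 to test SSH instead

def oobm_reachable(ip: str, port: int, timeout: float = 5.0) -> bool:
    """True if a TCP connection to the management interface succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if oobm_reachable(MGMT_IP, PORT):
    print("OOBM reachable")
else:
    print("OOBM NOT reachable -- do not leave the data centre yet")
```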