Whether you’ve outgrown your current data centre, want to add another site to your business, or have come across some reasonably priced land that would suit a new facility, there are many factors to weigh in order to have an efficient, safe, and cost-effective data centre. Here, we take a look at some points you may want to consider when looking into data centre design and build.
An efficient cooling system is crucial to avoid damage to electronic equipment – heat can be dangerous both to hardware and to the people using it. This is especially true in a data centre, where large numbers of servers sit in close proximity to one another.
A popular method of maintaining system temperature is rack-mounted fans. The racks are arranged in rows with aisles between them: conditioned air is drawn in at one end and the heated exhaust is expelled from the room at the other.
Another common cooling method is liquid cooling. These systems are more efficient than air cooling but often more expensive. They are installed within the rack, where chilled water circulates through internal tubes, carrying heat away from the servers.
In data centres and other computer-filled rooms, humidity control can consume a great deal of money and energy. Modern equipment can usually tolerate a relative humidity of 30%–70%; however, more sensitive items such as magnetic tape may need a tighter band (around 40%–55%).
An effective way to gain more efficient control over humidification is a central control unit, which can be powered by waste heat from the air-cooling system to minimise energy consumption. The central unit should allow for seasonal changes – keeping the target humidity lower during drier conditions and higher during more humid conditions (this minimises humidifier and dehumidifier loads respectively).
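The seasonal setpoint logic described above can be sketched in a few lines. This is a minimal illustration only – the thresholds, bands, and function names are assumptions for the sake of the example, not values from any real controller:

```python
# Illustrative sketch of a central humidity controller that shifts its target
# band with the season. All thresholds below are assumed example values.

def target_band(outdoor_rh: float) -> tuple[float, float]:
    """Return a (low, high) relative-humidity band for the server room.

    In drier conditions we sit lower in the safe 30-70% range (less
    humidifier load); in more humid conditions we sit higher (less
    dehumidifier load). Sensitive media would need a tighter band.
    """
    if outdoor_rh < 40:          # dry season: accept a lower band
        return (35.0, 50.0)
    elif outdoor_rh > 60:        # humid season: accept a higher band
        return (50.0, 65.0)
    return (40.0, 55.0)          # shoulder seasons: middle band

def control_action(room_rh: float, outdoor_rh: float) -> str:
    """Decide whether to humidify, dehumidify, or do nothing."""
    low, high = target_band(outdoor_rh)
    if room_rh < low:
        return "humidify"
    if room_rh > high:
        return "dehumidify"
    return "idle"
```

Because the band tracks the season, the plant fights the outdoor conditions far less often than a single fixed setpoint would.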
Data centres require large amounts of power. The national grid delivers this electricity at 400 volts or more, meaning it must be stepped down by a transformer and converted from alternating current to direct current by a rectifier. Usually, this process takes place individually within each piece of equipment (in the power supply unit); in a data centre, however, this is highly inefficient for both the cooling system and the servers themselves. In most telecommunication exchanges, the power supply is converted to DC only once and kept at a low voltage – this is not efficient for more power-intensive equipment like a data centre’s, because delivering the same power at a lower voltage requires a higher current, which in turn means larger cables and greater energy losses.
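The voltage-versus-current trade-off is simple arithmetic: for a load drawing power P at voltage V, the current is I = P/V, and the resistive loss in the cable is I²R. The figures below (a 10 kW load and a 50 mΩ cable run) are assumed purely for illustration:

```python
# Why low-voltage DC distribution scales poorly for power-hungry loads:
# for the same power, halving the voltage doubles the current and
# quadruples the I^2*R cable loss.

def cable_loss_watts(load_w: float, volts: float, cable_ohms: float) -> float:
    current = load_w / volts           # I = P / V
    return current ** 2 * cable_ohms   # P_loss = I^2 * R

load = 10_000.0      # a 10 kW rack (assumed figure)
resistance = 0.05    # 50 milliohm cable run (assumed figure)

print(cable_loss_watts(load, 400.0, resistance))  # at 400 V: ~31 W lost
print(cable_loss_watts(load, 48.0, resistance))   # at 48 V: ~2,170 W lost
```

At telecom-style 48 V DC, the same cable wastes roughly seventy times more energy as heat than at 400 V – which is why the low-voltage approach forces much thicker cabling in a data centre.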
Another vital component of a data centre is backup power. An uninterruptible power supply (UPS) with backup batteries and an emergency generator must be installed, with the batteries sized so they can power the entire centre for the period between a grid failure and the generator taking over.
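The sizing requirement above amounts to a single comparison: battery runtime must exceed the generator’s start-up time. The battery capacity, facility load, and start-up figure below are assumed examples, not a real specification:

```python
# Rough UPS sizing check (assumed figures): the batteries must carry the
# full load at least until the emergency generator is online.

def bridge_time_minutes(battery_wh: float, load_w: float) -> float:
    """Minutes the batteries can power the load, ignoring inverter losses."""
    return battery_wh / load_w * 60

battery_wh = 50_000.0     # 50 kWh battery bank (assumed)
load_w = 200_000.0        # 200 kW facility load (assumed)
generator_start_min = 2.0 # generator reaches full load in ~2 min (assumed)

runtime = bridge_time_minutes(battery_wh, load_w)  # 15.0 minutes here
assert runtime > generator_start_min, "batteries cannot bridge the transition"
```

In practice a healthy margin is left on top of the generator start time, since battery capacity degrades with age and temperature.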
Some data centres with high energy demands may prefer to generate their own electricity. In this case a gas-fired generator powers the facility, and to reduce cooling costs its waste heat can be used to run absorption chillers. Backup power should still be considered here – for instance a standby diesel generator, or even a connection to the national grid used as the emergency supply.
The data centre should be monitored remotely so that any malfunction or danger raises an alert. This can prevent a situation from becoming critical by warning the relevant personnel of even minor problems. To keep all of the centre’s equipment in working order, maintenance inspections are necessary, carried out as often as each component’s manufacturer recommends. Maintenance contracts are an easy way to do this, as they ensure that the procedure is done properly and that equipment is built to the correct specifications – this is especially important when initially building the data centre, and a qualified contractor should be used.
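The monitoring logic described above – flag even minor excursions before they become critical – can be sketched as a simple threshold check. The sensor names, bounds, and `notify` hook are hypothetical placeholders, not any real monitoring product’s API:

```python
# Minimal sketch of remote-monitoring logic: compare readings against
# warning bounds and alert personnel on any excursion. All names and
# thresholds below are assumed for illustration.

WARN_LIMITS = {                      # (low, high) warning bounds, assumed
    "inlet_temp_c": (18.0, 27.0),
    "humidity_rh": (40.0, 55.0),
    "ups_charge_pct": (80.0, 100.0),
}

def check_readings(readings: dict[str, float]) -> list[str]:
    """Return a warning message for every reading outside its bounds."""
    warnings = []
    for name, value in readings.items():
        low, high = WARN_LIMITS[name]
        if not (low <= value <= high):
            warnings.append(f"{name}={value} outside {low}-{high}")
    return warnings

def notify(warnings: list[str]) -> None:
    for w in warnings:               # in practice: email, SMS, on-call page
        print("ALERT:", w)

notify(check_readings({"inlet_temp_c": 29.5,
                       "humidity_rh": 45.0,
                       "ups_charge_pct": 95.0}))
```

Even this toy version shows the principle: a slightly high inlet temperature generates an alert long before servers start throttling or failing.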
Data centres must run continuously, so making it easy to add or remove components is essential. Cables should be managed so that individual racks can be connected and disconnected without causing system failure. This includes separating different types of cables to minimise interference.
Storage devices, servers and switches should typically be replaced every four years; cables need replacing far less often, roughly every 15–20 years.
As the points above show, there is much to consider when designing a data centre that best fits your needs. Plan carefully, take these tips on board, and there’s no reason you shouldn’t end up with a data centre that works perfectly for you.