
Industry Lessons: Until Power Is Better Understood, BA Won’t Be an Isolated Incident
Janne Paananen, Eaton

Feeling the heat

Summer, with its long hot days, warm evenings and holidays, is all fun in the sun. But if summer is your business’s busiest time of year and all its critical IT systems go down, causing chaos for thousands of your customers and damaging the company’s reputation, then the fun fades quicker than any holiday suntan.

There are certain events that shouldn’t happen. They can’t be blamed on the weather, unscheduled maintenance or even a “power surge”, because poor planning is always the better explanation. There has been much speculation about what went wrong at BA, and there is also surprise that anything went wrong at all, given the complexity and immense scale of an airline’s business and data centre operations, estimated at 500 cabinets. An airline’s operation is second only to the banking industry in its size, scale and need for 100% uptime. Safety, security and customer service depend on it.

Outages are not isolated incidents

And yet, at a data centre industry level, this is far from an isolated incident. A survey commissioned by Eaton of IT and data centre managers across Europe found that 27% of respondents had suffered a prolonged outage leading to a disruptive level of downtime in the last three months. The vast majority of respondents (82%) agree that most critical business processes are dependent on IT, and 74% say the health of the data centre directly impacts the quality of IT services. This paints a clear picture: the business depends on IT, and IT depends on the data centre to function. So the fact that more than one in four data centres had recently suffered a prolonged outage tells us that something is wrong at an industry level.
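
To put that figure in perspective, here is a rough back-of-envelope sketch (an illustration, not a claim made by the survey): if the 27% rate measured over three months were representative and each quarter carried the same independent risk, only around 28% of data centres would get through a full year without a prolonged outage.

    # Hypothetical annualisation of the Eaton survey figure - an illustrative
    # assumption about independence, not something the survey itself reports.
    quarterly_outage_rate = 0.27                    # prolonged outage in the last 3 months
    clean_year = (1 - quarterly_outage_rate) ** 4   # no outage in any of 4 quarters
    print(f"Year with no prolonged outage: {clean_year:.0%}")               # ~28%
    print(f"At least one prolonged outage in a year: {1 - clean_year:.0%}") # ~72%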

Poor power planning

Just as critical business processes depend on IT, the data centre itself must provide resilience to keep the business running. It’s a core facet of a business’s risk management strategy.

The only thing we know for certain in the case of BA is that someone or something killed the power to the data centre and, whether through a panicked response or a lack of knowledge, when the power was reapplied, incorrect processes exacerbated the issues even further. We should be careful not to attribute this failure to any individual technology or person; it’s a problem of poor understanding of power that could and should have been prevented by proper processes and power system design, especially if they’d followed
