
These areas of evidence sit alongside other leading performance indicators, such as:

Overdue maintenance routines (inspections, tests, calibration, cleaning, etc.)

Outdated procedures

Overdue assessments & audits

Open actions (from assessments, audits, etc.)

Failures under test

Standing alarms & overrides

Critical alarms per hour

Work performed without a permit

Uncontrolled (unassessed) changes (modifications & overrides)

The list above goes a long way towards indicating the minimum scope of work that ‘Functional Safety Management’ really includes.

I would add to the list:

Competency of safety-related employees

Suitable functional safety planning and reporting
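
To make these indicators concrete, here is a minimal sketch of how a couple of them might be tracked in code. The record types, field names and example data are my own illustrative assumptions, not taken from any particular CMMS or SIS toolset.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Illustrative record types only -- names and fields are assumptions,
# not taken from any particular CMMS or SIS toolset.

@dataclass
class MaintenanceTask:
    tag: str                        # instrument or SIF tag, e.g. "PT-101"
    task_type: str                  # "inspection", "proof_test", "calibration", ...
    due: date
    completed: Optional[date] = None

@dataclass
class ActionItem:
    source: str                     # "assessment", "audit", ...
    raised: date
    closed: Optional[date] = None

def overdue_tasks(tasks: List[MaintenanceTask], today: date) -> List[MaintenanceTask]:
    """Leading indicator: maintenance routines past their due date."""
    return [t for t in tasks if t.completed is None and t.due < today]

def open_actions(actions: List[ActionItem]) -> List[ActionItem]:
    """Leading indicator: actions from assessments/audits still open."""
    return [a for a in actions if a.closed is None]

if __name__ == "__main__":
    today = date(2024, 6, 1)
    tasks = [
        MaintenanceTask("PT-101", "proof_test", due=date(2024, 5, 1)),
        MaintenanceTask("XV-201", "inspection", due=date(2024, 7, 1)),
    ]
    actions = [ActionItem("audit", raised=date(2024, 1, 15))]
    print(f"Overdue maintenance routines: {len(overdue_tasks(tasks, today))}")
    print(f"Open actions: {len(open_actions(actions))}")
```

In practice such figures would be pulled from the maintenance and action-tracking systems and trended over time; the point is simply that each indicator reduces to a countable, reportable quantity.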

Searching through the specific groups formed for functional safety professionals, there are many well-supported publications & posts on very specific subjects such as partial valve stroking, SIF verification and proof testing. But focused posts around functional safety management, even in a group called ‘Functional Safety’, are few and far between, hardly viewed and attract next to no comments. We have to ask why.

A well-run Functional Safety Management system will help the operator to run under safe conditions and reduce the risk of a major accident to an acceptable level. The actual implementation of such a system should give us all confidence that we are working in a sufficiently safe environment, because we know the major hazards, what the protection measures are, and how they are performing and being managed.

Functional safety planning is the means by which your competent people are allocated the correct roles and assigned the right activities within each phase. The phases themselves must be defined, along with reporting structures and assessments/audits by independent persons.
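
As one way of picturing that, here is a minimal sketch of a functional safety plan as a data structure: phases mapped to activities, each with a responsible role and an independent check. The phase names gesture at the IEC 61511 safety lifecycle, but the structure, role names and activities are illustrative assumptions, not a prescribed scheme.

```python
from dataclasses import dataclass
from typing import Dict, List

# A sketch only: phase names gesture at the IEC 61511 safety lifecycle,
# but the structure, roles and activities are illustrative assumptions.

@dataclass
class Activity:
    name: str
    responsible_role: str
    independent_check: str   # who assesses/audits, independent of the doer

SafetyPlan = Dict[str, List[Activity]]

PLAN: SafetyPlan = {
    "Hazard & risk assessment": [
        Activity("HAZOP study", "Process Safety Engineer", "independent chair"),
    ],
    "SIS design & engineering": [
        Activity("SIF design & SIL verification", "Functional Safety Engineer",
                 "independent FS assessor"),
    ],
    "Operation & maintenance": [
        Activity("Proof testing", "Maintenance Technician",
                 "FS audit by an independent person"),
    ],
}

def report(plan: SafetyPlan) -> None:
    """Minimal reporting structure: who does what, and who checks it."""
    for phase, activities in plan.items():
        print(phase)
        for a in activities:
            print(f"  {a.name}: {a.responsible_role} "
                  f"(checked by {a.independent_check})")

if __name__ == "__main__":
    report(PLAN)
```

Running `report(PLAN)` prints a simple who-does-what-and-who-checks-it view, which is the essence of the reporting structure described above.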

Why don’t we share that knowledge, even internally? So how are we doing generally? In this “big-data” world we are really just exchanging information that, in isolation, has limited value.

So, imagine a medium- to large-scale SIS project, from concept to operation, which is designed, built, tested, installed, commissioned and operated on identical Plants A & B. There are no gaps in the competence of the teams, and the teams are equal in expertise and experience. The same data is available to both teams, and the major components, i.e. sensors, logic solvers and final elements, are from one manufacturer.

If we assessed and audited at defined times during the phases and compared notes at year five, would we have two very well-designed, maintained and operated systems? Would one have had more systematic errors, spurious trips and difficult testing regimes? Would production have been identical and uninterrupted? Without any argument we can surmise that the results would at least be different. Good and very good, good and poor, excellent and terrible: we don’t know.

What would the result have been if the design reviews and assessment results had been shared? We would like to think of results more in the realms of very good and excellent, with the added bonus of interchangeable staff & other resources.