Thursday, November 29, 2007

High Integrity Protection Systems (HIPS) – Making SIL Calculations Effective

In the oil industry, the traditional protection systems defined in American Petroleum Institute (API) 14C are increasingly being replaced by high integrity protection systems (HIPS). In particular, this encompasses the well-known high integrity pressure protection systems (HIPPS) used specifically to protect against overpressure. As safety instrumented systems (SIS), they have to be analysed through the formal processes described in the International Electrotechnical Commission (IEC) 61508 and IEC 61511 standards in order to assess which Safety Integrity Level (SIL) they are able to claim. What really matters when dealing with safety systems is that the probability of an accident be low enough to be acceptable given the magnitude of the consequences. This can be achieved in many different ways: applying rules, know-how or standards that may be deterministic, probabilistic, qualitative or quantitative; using reliability analyses, methods and tools; collecting statistics; and so on. Among these we find SIL calculations as per IEC 61508 and IEC 61511. It must be kept in mind, however, that calculating a SIL is not an end in itself: it is only one tool among many to help engineers master safety throughout the whole life cycle of the safety systems. This proves very efficient from an organisational point of view but, unfortunately, problems arise when probabilistic calculations are performed by analysts who believe it is an easy job consisting merely of applying some magical formulae (found in IEC 61508, Part 6) or of assembling a kind of 'Lego' from certified, off-the-shelf SIL-rated elements. Beyond the fact that sound mathematical theorems (Bellman or Gödel) demonstrate that proceeding this way gives no guarantee of good results, it is the complete negation of the spirit developed in the reliability field over the last 50 years, which is based on a sound knowledge of probabilistic concepts and an in-depth analysis of the systems under study.
Therefore, a skilled reliability analyst who aims to use the above standards in a way that is both clever and compatible with traditional analysis has several difficulties to solve. Some are simple, such as reconciling the probability concepts of the IEC standards with those recognised in the reliability field, or improving the failure taxonomy and definitions where needed. Some are more difficult, such as handling the complex test and maintenance procedures encountered in the oil industry. And some are almost impossible, such as dealing with concepts like the 'Safe Failure Fraction' (SFF), which is not really relevant in our field, where spurious failures have to be thoroughly considered and avoided.
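To make the target concrete, here is a minimal sketch (in Python) that maps a computed average probability of failure on demand to the low-demand SIL bands of IEC 61508 (Part 1, Table 2). The function name and the sample value are illustrative only, and, as argued above, such a mapping is just one input among many in mastering safety.

```python
def sil_from_pfd_avg(pfd_avg):
    """Map an average PFD (low-demand mode) to the corresponding
    SIL band of IEC 61508, Part 1, Table 2.
    Returns None when the value lies outside the SIL 1..4 bands."""
    bands = [
        (1e-5, 1e-4, 4),   # SIL 4: 1e-5 <= PFDavg < 1e-4
        (1e-4, 1e-3, 3),   # SIL 3
        (1e-3, 1e-2, 2),   # SIL 2
        (1e-2, 1e-1, 1),   # SIL 1
    ]
    for low, high, sil in bands:
        if low <= pfd_avg < high:
            return sil
    return None

# Illustrative value only: a PFDavg of 4.5e-3 falls in the SIL 2 band.
print(sil_from_pfd_avg(4.5e-3))  # -> 2
```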
SIL versus Traditional Concepts
The size of this article being limited, we can only give some indications of how we manage SIL calculations efficiently for oil production installations. Figure 1 shows the links with the traditional concepts. The first protection layer works in continuous mode and the standards require the calculation of its Probability of Failure per Hour (PFH), which is actually an average frequency of failure. When the expected number of failures over [0, T] is small compared with 1, PFH may be approximated by F1(T)/T; when this is not the case, 1/MTTF should be used instead. In these formulae, F1(T) is the unreliability of this layer over [0, T] and MTTF its classical Mean Time To Failure. In the general case, therefore, PFH cannot be assimilated to a failure rate. In any case it gives the demand frequency on the second layer, which runs in low-demand mode (provided the first layer is efficient). Its Probability of Failure on Demand (PFD), as per the standards, is in fact its average unavailability P2. Then F1(T).P2 is the probability that both protection layers fail during a given period T. If there is no further protection layer, this is the probability of an accident; if a third protection layer is installed, this in turn determines the demand frequency on that layer. Note that the Risk Reduction Factor (RRF) is infinite when working in continuous mode.
The standards split the demand mode between low and high according to the demand frequency (lower or greater than one per year). From a probabilistic calculation point of view we prefer to consider the relationship between test and demand frequencies instead: when the test frequency is high compared with the demand frequency, PFD may be used; otherwise it is better to use the unreliability, which provides a conservative estimation. From a failure mode point of view, the main problem encountered is that genuine on-demand failures are overlooked by the standards. They are likely to occur when the system experiences sudden changes of state. Therefore, they shall be taken into consideration when calculating the PFD, which comprises both hidden failures (occurring within test intervals) and genuine on-demand failures (due to the tests or demands themselves). Another commonly encountered problem is that a superficial reading of the standards leads one to think that every revealed failure automatically becomes safe. This, of course, is not true: it remains unsafe until something is done to make it safe. This also has to be considered in the calculations.
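As a minimal numerical sketch of the relationships above, assuming exponentially distributed failures and purely illustrative figures (the rates, test interval and on-demand probability below are not taken from the article):

```python
import math

# --- Assumed, illustrative figures (not from the article) ---
T_HOURS = 8760.0       # observation period [0, T]: one year
LAMBDA_1 = 1.0e-4      # failure rate of the continuous-mode layer (per hour)
LAMBDA_DU_2 = 2.0e-6   # dangerous undetected failure rate of the low-demand layer (per hour)
TAU_2 = 4380.0         # proof-test interval of the low-demand layer (hours)
GAMMA_2 = 1.0e-4       # genuine on-demand failure probability (per demand or test)

# Unreliability of the first (continuous-mode) layer over [0, T],
# assuming a constant failure rate.
F1 = 1.0 - math.exp(-LAMBDA_1 * T_HOURS)

# PFH: average failure frequency of the continuous-mode layer.
# When the expected number of failures over [0, T] is small compared
# with 1, F1(T)/T is a good approximation; otherwise 1/MTTF is used
# (here MTTF = 1/lambda for the exponential model).
if LAMBDA_1 * T_HOURS < 0.1:
    pfh_1 = F1 / T_HOURS
else:
    pfh_1 = LAMBDA_1  # = 1 / MTTF

# PFD of the second (low-demand) layer: its average unavailability,
# approximated here by the classic lambda_du * tau / 2 term for hidden
# failures revealed by proof tests, plus the genuine on-demand failure
# probability that, as noted above, should not be forgotten.
pfd_2 = LAMBDA_DU_2 * TAU_2 / 2.0 + GAMMA_2

# Probability that both protection layers fail during [0, T]:
# the probability of an accident if no further layer exists.
p_both_fail = F1 * pfd_2

print(f"F1(T)          = {F1:.3e}")
print(f"PFH (layer 1)  = {pfh_1:.3e} per hour")
print(f"PFD (layer 2)  = {pfd_2:.3e}")
print(f"P(accident, T) = {p_both_fail:.3e}")
```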
