Moving…


I wanted to present a note to my followers…

I am in the process of moving my blog from this WordPress site over to the Kenexis web site.  The Kenexis web site is:

http://www.kenexis.com

If you hover over the “NEWS” item in the menu bar, you will be presented with a “subscribe” option that will allow you to re-subscribe to my blog in its new location.  You will also have the option of receiving updates on blogs from other Kenexis contributors.  I hope that you find the blog on the Kenexis web site more useful than what I’ve presented here at WordPress.  I look forward to hearing from all of you in the future…

Can I Increase SIS Test Intervals to Seven Years?


I recently received a question from an operating company engineer.  He was asked to confirm that the safety instrumented systems were suitable for increasing the test interval up to seven years from a current figure of five years.  Even though he believed that the calculations would show that the increased test interval was acceptable, he was hesitant to make the drastic two year shift based on simple gut-feel discomfort.  I was able to give him some more technical basis for why his gut was telling him that the increase did not feel right even though the “perfect math” of SIL verification calculations might have been able to justify it.


 

As you know, refineries are always trying to extend the run length between turnarounds in order to minimize expense.  In doing so, they are also increasing the time interval between SIS tests, if tests are only possible during the shutdown that the turnaround provides.  While it might seem that determining whether or not these extended intervals are acceptable is a simple matter of re-running the SIL verification calculations with a different test interval, the reality is a bit more complex.

SIL verification calculations depend on failure rates for SIS equipment items.  The data that we use for those failure rates is often collected from SIS equipment operating in the field and compiled into databases such as OREDA and NPRD.  A database simply shows a single (constant) failure rate for a device, implying that the single number is an attribute of that specific type of device, but again, the truth is much more complex.
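To make the “perfect math” concrete, here is a minimal sketch of the simplified single-device (1oo1) average-PFD formula, PFDavg ≈ λDU × TI / 2, from IEC 61508.  The dangerous-undetected failure rate used below is an assumed illustrative value, not field data:

```python
# Sketch of the simplified 1oo1 average-PFD calculation
# (IEC 61508 simplified equation: PFDavg ~ lambda_DU * TI / 2).
# LAMBDA_DU is an assumed illustrative value, not real field data.

def pfd_avg(lambda_du_per_hr: float, test_interval_yr: float) -> float:
    """Average probability of failure on demand for a single (1oo1) device."""
    hours = test_interval_yr * 8760.0  # years -> hours
    return lambda_du_per_hr * hours / 2.0

LAMBDA_DU = 2.0e-7  # assumed dangerous-undetected failure rate, per hour

for ti in (5.0, 7.0):
    print(f"TI = {ti} yr -> PFDavg = {pfd_avg(LAMBDA_DU, ti):.2e}")
```

With this assumed rate, both the 5-year and the 7-year interval land in the SIL 2 band (PFDavg between 10⁻³ and 10⁻²), which is exactly why a calculation alone can appear to justify the extension.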

When we collect and use data for failure rate calculations we are making two fundamental assumptions that might not be obvious to all persons who perform SIL verification calculations.  These assumptions are:

  1. Constant Failure Rate
  2. Well designed and well maintained equipment

The first assumption is that the failure rate of an instrument is constant over its entire lifetime.  Stated another way, this assumption implies that the probability of a device failing in year 1 is exactly the same as the probability of it failing in year 2, 5, 10, or 20.  While the constant failure rate assumption is fairly valid for electronic equipment during its useful life (i.e., after burn-in but before wear-out failures start to occur, usually about 10 years after fabrication of the equipment), it is less valid for equipment with moving parts subject to wear, such as a valve.  As we collect data, we generally do not record when the failure occurred relative to the installation of the equipment item, so databases will generally produce failure rates that are representative of equipment items across all of the ages typically in service.  As such, if most operating companies are testing (and also performing maintenance) at a 5-year turnaround interval, then the databases that we use for SIL verification calculations reflect SIS instruments that have been in service for up to five years.
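The constant-failure-rate assumption corresponds mathematically to an exponential lifetime distribution, which is “memoryless.”  The sketch below, using an assumed illustrative failure rate, shows that the chance of a device failing in the coming year is identical whether it is brand new or has been in service for nineteen years:

```python
import math

# Sketch of the constant-failure-rate (exponential) assumption.
# LAM is an assumed illustrative failure rate, not real field data.

LAM = 0.02  # assumed constant failure rate, per year

def p_fail_next_year(age_yr: float) -> float:
    """P(fail before age+1 yr | survived to age) for an exponential lifetime."""
    survive_to_age = math.exp(-LAM * age_yr)
    survive_one_more = math.exp(-LAM * (age_yr + 1.0))
    return (survive_to_age - survive_one_more) / survive_to_age

# The conditional one-year failure probability is the same at every age:
print([round(p_fail_next_year(a), 6) for a in (0, 1, 4, 9, 19)])
```

Real valves with wearing parts do not behave this way, which is precisely the gap between the data's implicit assumption and a 7-year run.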

If a user goes outside the typical turnaround intervals, increasing intervals to 6 or 7 years, then SIL verifications based on data from SIS instruments with shorter test intervals do not accurately represent the (increased) failure rates that might be expected as the between-testing and between-maintenance intervals are increased – including instruments that are 6 and 7 years beyond their last test or maintenance.  As such, engineering judgment would indicate that using typical failure rate data is too aggressive a stance, but common data based on the increased test intervals is not yet available.

The assumption of a well designed and well maintained system comes into play in a very similar way.  If a routine maintenance task, such as greasing a bearing, replacing packing, or replacing seals, is performed at every turnaround, and a failure of those components can cause a failure of the SIS, then the failure rate data is critically dependent on those maintenance actions occurring at the five-year interval.  If the maintenance activity does not occur until 6 or 7 years after the start of a run, one can infer that the failure rates (especially as the devices reach their 6th and 7th years) will increase.  Since the bulk of industry, from which the typical failure rate data is derived, performs its maintenance activities on the shorter 4-5 year interval, it again can be inferred that if the test interval is increased to 6 or 7 years but data from a 4-5 year maintenance interval is used for failure rates, the PFD calculations used to verify the achieved SIL will be in error, and in an aggressive, non-conservative direction.
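One way to see why deferred maintenance undermines the data is to contrast the constant-rate picture with a Weibull wear-out hazard (shape β > 1), a common model for mechanical wear.  The shape and scale parameters below are illustrative assumptions, not published data:

```python
# Sketch contrasting wear-out behavior with the constant-rate assumption,
# using a Weibull hazard function h(t) = (beta/eta) * (t/eta)**(beta - 1).
# BETA and ETA are assumed illustrative parameters, not field data.

BETA = 2.5   # assumed Weibull shape; > 1 means the failure rate rises with age
ETA = 25.0   # assumed Weibull scale (characteristic life), years

def weibull_hazard(t_yr: float) -> float:
    """Instantaneous failure rate at age t for a Weibull lifetime."""
    return (BETA / ETA) * (t_yr / ETA) ** (BETA - 1.0)

h5 = weibull_hazard(5.0)  # worst hazard reached within a 5-year cycle
h7 = weibull_hazard(7.0)  # hazard reached if the run extends to 7 years
print(f"h(5) = {h5:.4f}/yr, h(7) = {h7:.4f}/yr, ratio = {h7 / h5:.2f}")
```

Under these assumed parameters, the instantaneous failure rate at year 7 is roughly 1.7 times the worst rate ever reached inside a 5-year maintenance cycle, so a failure rate averaged over 5-year regimes systematically understates years 6 and 7.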

Unfortunately, making a large increase in test intervals, especially in comparison to what industry peers are doing, may result in non-conservative SIL verification calculations whose underlying data does not accurately represent the operating regime the plant will be in after the test intervals have been increased.  In order to prudently increase test intervals, the between-testing interval needs to be increased more slowly, perhaps one half year at a time, to give the actual failure rate data being collected by the plant and by industry as a whole time to catch up with the plant’s new operating profile.
