BETTER PLANT PERFORMANCE THROUGH BETTER ANALYZER MANAGEMENT

Hans van Nuenen
KROHNE Oil & Gas
Breda, the Netherlands

KEYWORDS

ANALYZER MANAGEMENT, DATA ACQUISITION INTERFACING, DCS, LIMS, ERP

ABSTRACT

For decades, refineries and chemical plants have spent significant amounts of money on projects for the installation of Quality Measuring Instruments (QMI), including in-line process analyzers, environmental quality analyzers, gas detection sensors, etc. In comparison, almost no investment is made in an integrated plant quality data infrastructure, which would create the means for considerable improvements in the availability and accuracy of these capital assets. Present techniques for automatic data acquisition and data handling minimize the clerical work inherent in manual procedures, hence improving the reliability of the data. Decision makers will benefit from maximized transparency on the true performance of the analyzers in the plant, in particular for custody transfer applications (QMI release), where the return on investment becomes even more obvious.

INSTRUMENT VALIDATION

The high manpower cost of maintenance for QMIs makes it essential to have better means for cost control. However, reducing the direct cost of maintenance could well lead to unacceptable risk levels of plant unavailability, the cost of which could far outweigh the direct cost avoidance. A balance therefore has to be found so that cost cutting remains controlled.

One of the many time-consuming activities in the maintenance work on QMIs is the validation and calibration of instruments. The validation intervals, if specified by the manufacturer at all, are typically worst-case figures to guarantee maximum accuracy. By continuously monitoring the performance of the instruments, however, it often appears that the validation intervals can be extended considerably, hence reducing cost substantially.

A maintenance person may adjust (calibrate) an instrument if a deviation from the expected value occurs. However, by applying some statistics it often appears that this is not necessary or, even worse, that the adjustment may make the instrument less accurate. In order to apply statistics, historical figures are required.

[Chart: instrument variation over successive validation points (1-100), with warning limits and instrument calibration events indicated.]

FIGURE 1. INAPPROPRIATE INSTRUMENT CALIBRATION

The current generation of instruments in particular is far more reliable and accurate than it is given credit for. Where operators have tended to place the blame on analyzers, these instruments are seldom at fault. By monitoring the instruments, a historical record of past performance can be built up to substantiate such an assertion, avoiding cost. When the method used to check the instrument accuracy is not well developed, or procedures are not adhered to, the blame for inaccurate operation will normally be put on the instrument. Means have to be provided to detect these situations as early as possible.

The normal scatter in results and the variation of the drift over time make it necessary to obtain validation measurements over longer periods of time. The validation process therefore has to be based on the use of statistical control techniques. The analytical validation results are related to the maintenance activities executed and determine those still to be executed. Combining analytical and maintenance data leads the way to the elimination of error sources in the measuring and checking process which would otherwise go unnoticed. When this has to be done by hand it is a time-consuming exercise and is therefore normally not performed. It has been shown at various plant sites that this approach, when automated, leads to better performance of the instruments and more efficient use of maintenance manpower.

It has also been shown that simplification of the data acquisition and data entry process is required to obtain sufficient and continuous cooperation of all parties involved and so make the application a success. Data entry facilities have to be present at the actual job locations to eliminate the need for intermediate handwritten notes. Tedious data input can be eliminated. Vague procedures can be replaced by well-defined dialogues via operator stations, with automatic logging and registration of events and personnel identification. Unnecessary time delays in the execution of these guided procedures can be traced [1].
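As an illustration of this statistical control, the following Python sketch judges a new validation deviation against warning and action limits derived from the historical scatter. It is a minimal example only; the limit factors, the six-result minimum and the sample data are assumptions for illustration and are not taken from the cited methods.

    from statistics import mean, stdev

    def assess_validation(history, new_deviation, warn_factor=2.0, action_factor=3.0):
        """Judge a new validation deviation against the historical behaviour.

        history       -- past deviations (analyzer result minus accepted reference value)
        new_deviation -- deviation observed in the current validation
        """
        if len(history) < 6:
            return "insufficient history"     # too little data to estimate the scatter

        centre = mean(history)                # systematic bias seen so far
        sigma = stdev(history)                # normal scatter of the validation results

        offset = abs(new_deviation - centre)
        if offset > action_factor * sigma:
            return "action required"          # calibrate or investigate the analyzer
        if offset > warn_factor * sigma:
            return "warning"                  # watch the next validations closely
        return "in control"                   # do not adjust; adjusting would add noise

    # Example with ten historical deviations and one new validation result
    past = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, -0.3, 0.1, 0.0, -0.1]
    print(assess_validation(past, 0.25))      # prints 'in control'

In practice such a history would be maintained per analyzer in the central database, and the verdict would drive the decision on whether an adjustment is actually warranted.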


STAKEHOLDERS

The bottom-line reason why you need better analyzer management is to be able to give key people better information on which to base decisions on how to run the plant equipment. Such people include:

- Process Operators, in order to keep track of the operational state and overall availability of on-line analyzers
- QMI Engineers, as they are informed about the actual analyzer performance against target
- Laboratory Personnel, responsible for the quality of all traceable standards on site and for the certification of the quality of all products that are transferred to customers against established contracts
- Maintenance Staff, in their effort to achieve optimal efficiency in analyzer maintenance activities and to ensure the highest availability of process analyzer assets
- Plant Management, as the ultimate custodian of the equipment concerned, with the responsibility for improvement in plant and refinery effectiveness

ANALYZER MANAGEMENT

[Diagram: the Integrated Analyzer Management System positioned within the plant IT infrastructure, interfacing with the decision maker, the Distributed Control System (DCS), the Laboratory Information Management System (LIMS), the Plant Information System (PI) and the Enterprise Resource Planning (ERP) system.]

FIGURE 2. INTEGRATED ANALYZER MANAGEMENT SYSTEM

Integrated Analyzer Management is the overall term used to describe the combination of:

- Analytical Performance - monitored on a continuous basis
- Analyzer Availability - keeping track of operational states
- Analyzer Maintainability - making the information accessible to maintenance


ANALYTICAL PERFORMANCE

Validation is the process of confirming the actual analyzer performance against traceable and accepted standards and is, in contrast with calibration, a non-corrective metrological procedure. Standards are referred to as 'reference samples' or 'calibration materials'. Standards are usually kept within the confines of the plant and maintained either by the technicians responsible for the analyzer system or by the laboratory. Traceability shall be achieved through methodology, e.g. as per ASTM [2][3] or SMS [4] methods. Accepted refers to mutual agreement between partners about the standards used and the methodology followed. Partners are all who have a defined interest in the use of the analyzer system [5].

To verify that an instrument is working correctly, various procedures have to be executed. These procedures relate to accuracy and to time response. Methods of validation [6] in relation to accuracy are:

- Reference sample method - The reference sample material is directly introduced at the sample port of the analyzer. The recorded analyzer result is compared with the accepted reference value of the reference sample.
- Line sample method - During normal operation of the analyzer a sample is drawn at the analyzer sample port and the corresponding analyzer result is recorded. The sample is analyzed by the laboratory. The analyzer result is then compared with the accepted reference value produced by the laboratory.
- Reference measurement method - With this method a portable analyzer is temporarily lined up in series with the on-line analyzer. The reading of the on-line analyzer is compared with the reading of the portable analyzer. For data handling, the reference measurement can be treated in the same way as the line sample method.
- Filter method - This is a simple method which resembles the reference sample method and has the same data handling. For example, an optical filter is used to simulate a change in process conditions.

Methods of validation in relation to timing are (a sketch of both timing checks follows this list):

- Response time method - The time measured for the detector to reach its threshold upon exposure to a reference sample
- Timed response method - The response is measured after a predetermined period of exposure to a reference sample
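To make the two timing methods concrete, the sketch below derives both from a recorded signal trace. The trace, the threshold and the exposure period are hypothetical values chosen only to illustrate the calculation.

    def response_time(trace, threshold, t_start):
        """Elapsed time until the detector signal first reaches the threshold
        after the reference sample is introduced at t_start.
        trace -- time-ordered list of (time_in_seconds, signal_value) pairs"""
        for t, value in trace:
            if t >= t_start and value >= threshold:
                return t - t_start
        return None                          # threshold never reached

    def timed_response(trace, t_start, period):
        """Signal value recorded after a predetermined exposure period."""
        later = [value for t, value in trace if t >= t_start + period]
        return later[0] if later else None

    # Hypothetical 1 Hz trace: reference sample introduced at t = 10 s
    trace = [(t, 0.0 if t < 10 else min(100.0, 8.0 * (t - 10))) for t in range(40)]
    print(response_time(trace, threshold=50.0, t_start=10.0))   # 7.0 seconds
    print(timed_response(trace, t_start=10.0, period=20.0))     # reading at t = 30 s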

AVAILABILITY

Availability is defined as a measure of the degree to which an item is in an operable and committable state at the start of a mission when the mission is called for at a random point in time. Availability is the parameter that translates system reliability and maintainability characteristics into an index of effectiveness. It is based on the question: "Is the equipment available in a working condition when it is needed?"

Mean Time Between Failures (MTBF) is the predicted elapsed time between inherent failures of a system during operation. MTBF can be calculated as the arithmetic mean (average) time between failures of a system. The MTBF is typically part of a model that assumes the failed system is immediately repaired (zero elapsed time), as part of a renewal process.

Mean Time To Failure (MTTF), in contrast with MTBF, measures the average time to failure under the modeling assumption that the failed system is not repaired but replaced.

Mean Time Between Maintenance (MTBM) includes all corrective and preventive actions, whereas MTBF only accounts for failures.

Mean Time To Repair (MTTR) is a basic measure of the maintainability of repairable items. It represents the average time required to repair a failed component or device. Expressed mathematically, it is the total corrective maintenance time divided by the total number of corrective maintenance actions during a given period of time. It generally does not include lead time for parts not readily available or other Administrative or Logistic Downtime (ALDT).

Administrative and Logistics Down Time (ALDT) is the time spent waiting for parts, administrative processing, maintenance personnel or transportation (no maintenance time).

Mean Down Time (MDT) is the average time that a system is non-operational. This includes all time associated with repair, corrective and preventive maintenance, self-imposed downtime and any logistics or administrative delays. The difference between MDT and MTTR is that MDT includes any and all delays involved, whereas MTTR looks solely at repair time.

Availability, in its simplest form, is:

A = Uptime / (Uptime + Downtime)

Inherent availability looks at availability from a design perspective:

Ai = MTBF / (MTBF + MTTR)

Operational availability looks at availability by collecting all of the breakdowns in a practical system:

Ao = MTBM / (MTBM + MDT)
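The sketch below shows how these figures combine in practice: an entirely hypothetical maintenance log for one analyzer is reduced to MTBF, MTTR, MTBM and MDT, from which the inherent and operational availability follow. The operating hours, event list and repair times are invented for illustration.

    # Hypothetical record for one analyzer over an 8000-hour observation period.
    # Each event: (type, downtime_hours); downtime includes logistic and
    # administrative delays, so it contributes to MDT rather than to MTTR.
    OPERATING_HOURS = 8000.0
    events = [
        ("failure",    6.0),   # corrective repair, including waiting for parts
        ("failure",    3.0),
        ("preventive", 2.0),   # scheduled validation / cleaning
        ("preventive", 2.5),
        ("failure",    4.5),
    ]
    hands_on_repair_hours = 10.0          # pure repair time for the three failures

    n_failures = sum(1 for kind, _ in events if kind == "failure")

    mtbf = OPERATING_HOURS / n_failures                      # failures only
    mttr = hands_on_repair_hours / n_failures                # repair time only
    mtbm = OPERATING_HOURS / len(events)                     # corrective + preventive
    mdt = sum(hours for _, hours in events) / len(events)    # all downtime incl. delays

    a_inherent = mtbf / (mtbf + mttr)       # Ai = MTBF / (MTBF + MTTR)
    a_operational = mtbm / (mtbm + mdt)     # Ao = MTBM / (MTBM + MDT)

    print(f"MTBF {mtbf:.0f} h, MTTR {mttr:.1f} h -> Ai = {a_inherent:.4f}")
    print(f"MTBM {mtbm:.0f} h, MDT {mdt:.1f} h -> Ao = {a_operational:.4f}")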

MAINTAINABILITY

Preventive maintenance is generally considered to include both condition-monitoring and life-extending tasks, which are scheduled at regular intervals. Some tasks, such as QMI validation, must be done while the equipment is operating; others, such as internal cleaning, must be done while the equipment is shut down.


Corrective maintenance is defined as maintenance work which involves the repair or replacement of components that have failed or broken down. For failure modes which lend themselves to condition monitoring, corrective maintenance should be the result of a regular inspection that identifies the failure in time for the corrective work to be planned, scheduled and then performed during a routine plant outage. When corrective maintenance is done, the equipment should be inspected to identify the reason for the failure and to allow action to be taken to eliminate or reduce the frequency of similar failures in the future. These inspections should be included in the overall maintenance plan.

Predictive maintenance is the art of knowing in advance which maintenance will be needed and when. The discussion on the value of predictive maintenance has long been settled: there is no doubt that cost, time, personnel, profit and even company reputation can benefit from doing what is needed at the most convenient time.

ANALYZER MANAGEMENT IN HISTORY

Historically, analyzer management has typically been a manual operation, if done at all. Before authorized maintenance personnel start a validation, they first have to inform the operator that they intend to take the instrument off-line; if not, normal process operation might be interrupted.

Normally a work instruction is written per type of instrument. In this instruction all steps to be performed to validate and/or calibrate the instrument are listed. These instructions have to be followed each time a validation is performed. The danger in this approach is that the written instructions might be used only the first few times; after a while, the engineer will typically execute the validation from memory. This will of course be correct in most cases, but invariably mistakes will be made, especially in the case of temporary replacement of manpower. Instructions might also change over time without the maintenance engineer noticing. An integrated Analyzer Management System can present on the spot the up-to-date instructions for each instrument and, even more, guide the engineer through the procedure.

According to the written instructions, readings from instruments have to be taken. This is a possible cause of misinterpretation, misreading and typographical errors. Automatic reading at the right moment can be handled by an integrated Analyzer Management & Data Acquisition System. Often calculations have to be made, e.g. the percentage of range: the readings have to be entered into a calculator, the calculation has to be made and the result must be written down. Again there are ample possibilities for errors. Calculations can, of course, also be performed by an integrated Analyzer Management System.

The results of a validation session will normally be written down on an instrument work sheet. This gives a historical overview of all validations, calibrations and repairs per instrument, but it is not very well suited to getting an overview over several instruments. Therefore these data are retyped into a computer database to allow for easy manipulation. It is obvious that this is a time-consuming effort with plenty of possibilities for mistakes.

INTEGRATED ANALYZER MANAGEMENT

To avoid the above-mentioned problems, an integrated Analyzer Management and Data Acquisition System (AMADAS), such as CalSys®, could be implemented with the following objectives in mind:

- Independence with respect to the Distributed Control System (DCS) and instruments - Complete independence of any particular brand of equipment means that the system can be used in any plant without the need for re-design and/or re-implementation. Independence also means that future developments are easier to handle; e.g. it is more likely that new instruments from other vendors can be connected to the independent integrated Analyzer Management System.

- Communication interfaces to the major DCS and instrument brands - The system should be capable of communicating with the various DCS systems on the market and also with different analyzers of any brand. Communication between the DCS and the integrated Analyzer Management System allows for:
  o The process operator to initiate validations via the DCS
  o The DCS to give the integrated Analyzer Management and Data Acquisition System (AMADAS) permission to start a validation
  o The DCS to perform the data acquisition and process control for the integrated Analyzer Management System, but only after the operator has granted permission to go into maintenance mode; the actual validation is then executed by the Analyzer Management System. AMADAS can request the DCS to prepare an instrument validation and wait for the permissive signal. This ensures that AMADAS validates the entire chain, including A/D conversion and processing in the DCS system (one possible sequencing of this handshake is sketched after Figure 3).
  o Automatic or semi-automatic software calibration by means of changing signal conversion constants in the DCS system.
  Communication between instruments and AMADAS can be useful in cases where the DCS cannot provide all data of a particular instrument. Instruments sometimes use different data channels for measurement data and validation/calibration data. A direct link between AMADAS and the particular instrument will by-pass these limitations and provide the means for automated validation/calibration.

- Elimination of the need for redundancy in AMADAS - As the zero and span correction values reside in the DCS, an interrupted AMADAS operation will not interfere with the process operation of the instruments or with the presentation of the data to the operator via the DCS. Hence there are no strict redundancy requirements for AMADAS, which allows for a straightforward and robust IT infrastructure.



- Communication interfaces with the Laboratory Information Management System (LIMS) - In order to support the line sample method of validation, a direct link between LIMS and AMADAS is required. Time-stamped data sets are used to link the laboratory analysis results to the on-line analyzer readings at the moment the sample was taken (a sketch of this time-stamp matching is given after this list).



- Communication interfaces with the Enterprise Resource Planning (ERP) system - Preventive maintenance actions can result from the validation sessions executed. Interfacing between AMADAS and the ERP system will allow for instant job activity registration and maintenance job ticket submission via the ERP system.

- Direct process interfaces - In some circumstances the data required for proper maintenance cannot be acquired via the DCS, or the DCS has no control over the stream selection valves that connect an instrument to the process and calibration equipment. AMADAS is, in these circumstances, capable of having its own analog and digital I/O interfaces to perform the data acquisition and control directly. An interface with the DCS is still needed in case automatic authorization is required.



- De-centralized architecture with central database - It is unrealistic to assume that all validations and calibrations can be automated and controlled remotely. It is more likely that some can be initiated from a central control room while others require that the instrument is within reach. Ideally, the AMADAS workstations are located where the validation/calibration takes place and the data are stored in a centralized database server. Decentralization also increases the system availability and allows for multi-disciplinary use of AMADAS.
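The time-stamp matching mentioned for the LIMS interface can be sketched as follows. The record layouts, the tag name and the five-minute tolerance window are assumptions for illustration and do not describe any particular product interface.

    from datetime import datetime, timedelta

    def match_lab_to_analyzer(lab_results, analyzer_readings, window_minutes=5):
        """Pair each laboratory result with the analyzer reading recorded closest
        to the moment the line sample was drawn, within a tolerance window.
        lab_results       -- dicts with keys "tag", "sampled_at", "value"
        analyzer_readings -- dicts with keys "tag", "timestamp", "value"
        Returns (lab_result, analyzer_reading, deviation) tuples."""
        window = timedelta(minutes=window_minutes)
        pairs = []
        for lab in lab_results:
            candidates = [r for r in analyzer_readings
                          if r["tag"] == lab["tag"]
                          and abs(r["timestamp"] - lab["sampled_at"]) <= window]
            if not candidates:
                continue                      # no reading close enough to score
            nearest = min(candidates,
                          key=lambda r: abs(r["timestamp"] - lab["sampled_at"]))
            pairs.append((lab, nearest, nearest["value"] - lab["value"]))
        return pairs

    # Hypothetical example for a single analyzer tag
    lab = [{"tag": "AI-4711", "sampled_at": datetime(2009, 8, 3, 10, 2), "value": 0.52}]
    online = [{"tag": "AI-4711", "timestamp": datetime(2009, 8, 3, 10, 0), "value": 0.55}]
    for lab_result, reading, deviation in match_lab_to_analyzer(lab, online):
        print(lab_result["tag"], f"deviation {deviation:+.2f}")   # AI-4711 deviation +0.03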

FIGURE 3. PLANT-WIDE INTERFACING
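One possible sequencing of the permissive handshake between AMADAS and the DCS, as described in the communication-interfaces objective above, is sketched below. The class, method names and values are purely illustrative stand-ins, not an actual DCS or AMADAS API.

    import time

    class MockDCS:
        """Stand-in for the DCS side of the handshake (illustrative only)."""
        def __init__(self):
            self.permissive = False
        def request_maintenance_mode(self, tag):
            # In reality the operator grants this from the DCS console.
            print(f"DCS: operator asked to release {tag} for validation")
            self.permissive = True
        def permissive_granted(self, tag):
            return self.permissive
        def read_value(self, tag):
            return 0.498            # value as processed by the DCS (full chain)
        def release_maintenance_mode(self, tag):
            self.permissive = False

    def run_validation(dcs, tag, reference_value, timeout_s=60.0):
        """Sequence described in the text: ask the DCS to prepare the instrument,
        wait for the permissive signal, validate through the DCS, then restore."""
        dcs.request_maintenance_mode(tag)
        deadline = time.monotonic() + timeout_s
        while not dcs.permissive_granted(tag):
            if time.monotonic() > deadline:
                raise TimeoutError(f"no permissive signal for {tag}")
            time.sleep(1.0)
        try:
            deviation = dcs.read_value(tag) - reference_value
            return deviation        # stored and judged against the control limits
        finally:
            dcs.release_maintenance_mode(tag)

    print(run_validation(MockDCS(), "AI-4711", reference_value=0.500))

Because the reading is taken through the DCS, the deviation reflects the entire measurement chain, which is exactly the property the text attributes to this mode of operation.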


KEY PERFORMANCE INDICATORS

Key performance indicators show where an asset is performing well or even over-performing. More important, however, is to know where assets are underperforming, to what extent, and how to schedule any necessary maintenance.

FIGURE 4. KEY PERFORMANCE INDICATORS

Such information is highly transparent. It detects areas of immediate, medium-term and long-term concern. It can help predict sources of trouble. It focuses on the bad actors among the plant assets. It also gives the operator the flexibility to schedule improvements and maintenance in a timely manner. This saves on:

- Time
- Costs
- People
- Convenience
- Other resources

PLANT-WIDE PERFORMANCE SUMMARY

The most obvious advantage of the plant-wide available information is the ability to track the performance of each individual analyzer. It can also be used to compare different types of analyzers with each other, to determine the best type for a particular task in terms of performance and reliability; this can even be done per manufacturer. Behind each performance summary there is naturally a number of layers of information that can be viewed and interpreted. The bottom line is a drastic increase in transparency on individual analyzer and plant performance.


TYPICAL PETROCHEMICAL PLANT - PERFORMANCE SUMMARY REPORT

Report period: 01-Aug-09 to 01-Sep-09
Site:
Plant:
Unit:

Performance Summary                                   #       %
Total number of measurements                        381
Measurements validated                              376   98.7%
Total number of accepted results                    351
Average accepted results per measurement            0.9
Measurements used for performance calculation       362    100%
Success rate                                        313   86.5%

Breakdown Summary                                     #       %
Measurements used for performance calculation       362    100%
No results                                            5    1.4%
Insufficient results                                  3    0.8%
Below expectations                                   41   11.3%
Adequate                                             32    8.8%
Tighten control                                      33    9.1%
Minor problems                                       27    7.5%
Above expectation                                   221   61.0%

FIGURE 5. PERFORMANCE SUMMARY REPORT
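How such a summary can be derived from the individual validation verdicts is sketched below. The verdict categories follow Figure 5, but the counting rules, and in particular which categories count towards the success rate, are assumptions chosen to reproduce the figures above and are not stated explicitly in the report.

    from collections import Counter

    # Per-measurement verdicts as produced by the statistical evaluation
    # (counts taken from Figure 5)
    verdicts = (["Above expectation"] * 221 + ["Below expectations"] * 41 +
                ["Tighten control"] * 33 + ["Adequate"] * 32 +
                ["Minor problems"] * 27 + ["No results"] * 5 +
                ["Insufficient results"] * 3)

    used = len(verdicts)                    # measurements used for the calculation
    counts = Counter(verdicts)

    # Assumed mapping of verdicts that count as a success (yields 313 of 362)
    success_classes = {"Above expectation", "Adequate", "Tighten control", "Minor problems"}
    successes = sum(counts[c] for c in success_classes)

    print(f"Measurements used : {used:4d}  100.0%")
    print(f"Success rate      : {successes:4d}  {100.0 * successes / used:5.1f}%")
    for name, n in counts.most_common():
        print(f"{name:<22s}: {n:4d}  {100.0 * n / used:5.1f}%")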


CONCLUSION

Judging analyzer performance is not an easy task, and when the accuracy checking method is not well developed, or procedures are not followed in detail, it may happen that the analyzer is blamed for inaccurate operation. Means therefore have to be provided to detect these situations as early as possible. The normal scatter in results and the variation of the drift over time make it necessary to use and interpret validations over longer periods of time. This is why statistical control needs to be applied in the validation process.

Tools like AMADAS assist in eliminating sources of potential error, which would otherwise go unnoticed, by showing the relation between analytical and maintenance data. It has been shown at various installations that the application of automated data collection systems has led to better performance and more efficient use of maintenance manpower. However, it has also been shown that a high degree of automation of the data generation and data entry is required to obtain sufficient cooperation of all parties involved and so make the application a success. AMADAS will not only help avoid unnecessary calibrations (adjustments), but will also make unnecessary time delays visible and so prevent their repetition in the future.

Summary of the improvements [7] achieved by application of an AMADAS system:

- The accuracy obtained from the analyzer result is expected to increase; in some cases improvements by a factor of 2 are not unusual. When this is taken into account for the 'critical analyzers', which are not only applied for QMI-release purposes, the yield can become enormous when a process plant unit is operated more accurately to specification.
  -> less 'give away'
- Reduction of man-hours used for checking the analyzers. The reduction is caused by automatic validation, most efficient when the 'reference method' is applied. In addition, man-hours are saved by minimizing unnecessary validation and calibration through the consistent use of control charts and statistical methods.
  -> it is estimated that the reduction of man-hours will be between 10 and 50%
- Validation initiated by the operator to check whether an analyzer is still performing accurately, without the need to call in an analyzer technician.
- Availability and reproducibility rates of the analyzers will improve.
  -> at locations where analyzer management systems are used, it was shown that figures above 95% can be obtained
- The historical performance interpretation of AMADAS is capable of giving early notification of deteriorating performance.
  -> it has been estimated that this will increase the QMI-release rates of blends by as much as 20%, especially during the first years of operation


REFERENCES

1. Duysings, "Description of the Analyzer Management And Data Acquisition System", MF 93-1410, Shell International Petroleum Maatschappij, April 1994.
2. ASTM D3764, "Standard Practices for Validation of Process Stream Analyzers".
3. ASTM D6299, "Applying Statistical Quality Assurance Techniques to Evaluate Analytical Measurement System Performance".
4. SMS 1006, "Guidelines for Statistical Control of Test Methods".
5. Cusell, "Analyzer Management And Data Acquisition System", OP 97-30425, Shell International Oil Products, October 1997.
6. IP 340, "Code of practice for calibrating and checking process analyzers".
7. Horst, Dick, "AMADAS Benefits", Hamburg, Germany, November 2000.
