
Friday, July 9, 2010

Why do PdM Programs Fail - PDF

To download the following three posts, click here

Why PdM Programs Fail - Part 3

Why Do Predictive Maintenance Programs Fail?
by Alan Friedman

In the past few years we have witnessed a marked change in predictive maintenance (PdM) practices whereby more and more companies are choosing to outsource their programs. While many facilities routinely calculate 20:1 return on investment metrics, others cynically refer to aging data collectors as “dust collectors” or use them as bookends.

Although the concept of PdM is now widely known, and its potential benefits generally accepted, many plants have failed to successfully exploit the available techniques and technologies in practice. This state of affairs raises the question: “Why do some programs succeed while others fail?”

As we enter a recession and maintenance staffs are cut, we will once again be asked to do more with less. This means we need to think now about how we conduct maintenance and determine how to do it more efficiently and intelligently in the future, whether through new internal processes or outside help. To rise to meet the challenges of the emerging economy, we must implement best practices, restructure, invest in infrastructure and be prepared to hit the ground running when the economy turns upward again.

In the coming months, I will be writing a number of articles addressing the subject of why PdM programs succeed or fail from the managerial, technical and financial perspectives. Whether one decides to use this information to beef up or restart an in-house program, determine what type of training may be best or to outsource some, or all, of these functions, the hope is to provide enough practical information to help you be successful in your endeavor. The article you are reading now will touch on some of the main themes that we will be exploring in more detail in the future.

Lack of Vision
No program can succeed if it is not well conceived. If done correctly, a predictive maintenance program should change the culture, philosophy and work flow of the maintenance department. It is not just the addition of a new technology or tool, but a different approach or strategy towards maintaining one’s assets. This approach is being undertaken in order to gain specific benefits that can and should be measured. These benefits include: increased uptime, reduced failures, shorter planned outages, fewer preventive maintenance actions and, ultimately, a more efficient facility. Failure to adapt the culture to this new philosophy, and benchmark the gains, will eventually lead to the program’s dissolution. Adopting new technologies without changing maintenance strategies will not produce the desired benefits.

Using a Tool without Understanding Why
Many facilities purchase a new technology, such as a vibration data collector or alignment tool, spend time and money learning how to use the tool, but little time understanding why it is being used. As an example, a particular facility I know of had the capacity and ability to detect incipient bearing wear in a pump using a vibration analysis system. Although the pump showed no signs of wear, the facility went ahead and changed out the bearings according to their preventive maintenance schedule. At another plant, a vibration analyst was adept at detecting mechanical faults in his plant’s machinery, but he was afraid to tell his supervisor about all of the problems he found because his supervisor might get angry at having to repair all of these machines! Both of these cases demonstrate the use of the technology as an end in itself without an overall vision of why the technology is being employed.

Failure to Justify the Program
In those facilities where the technology is being used correctly, and in the right context, I have often seen a program fail because its successes were not adequately documented. This is to say that the facility changed their philosophy to a predictive mode, correctly employed technology to reduce preventive maintenance actions and minimized catastrophic failures, but they failed to adequately document the efficiencies and savings associated with these actions. So, while employees within the maintenance department acknowledged that their work was useful, they had no data to prove this to those outside of their group. Sadly, they then saw their program get cut when managers had to tighten their budgets. In other cases, the person managing the PdM program left and no one picked up the ball.
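Even a rough ledger of avoided costs is enough to defend a program at budget time. The short Python sketch below shows the arithmetic involved; the helper name `pdm_roi` and the dollar figures are hypothetical examples, not taken from any particular program.

```python
def pdm_roi(avoided_failure_costs, pm_labor_savings, program_cost):
    """Return the ROI ratio of a PdM program for one period.

    avoided_failure_costs: repair/downtime costs avoided by catching
        faults early (documented estimates are fine).
    pm_labor_savings: labor saved by eliminating unneeded PM tasks.
    program_cost: equipment, training and staff time for the period.
    """
    total_savings = sum(avoided_failure_costs) + pm_labor_savings
    return total_savings / program_cost

# Two caught faults plus reduced PM labor, against a $10,000
# annual program cost, documents a 20:1 return.
roi = pdm_roi([50_000, 120_000], 30_000, 10_000)
print(f"{roi:.0f}:1")  # 20:1
```

The point is not the precision of the estimates but that they exist on paper, so the savings can be shown to someone outside the maintenance department.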

Lack of Consistency
Another component of a failed program is the lack of consistency over time. There are many causes for this, ranging from a failure to commit adequate personnel, lack of proper training, loss of skilled personnel, change in program direction/technology, failure to adequately define the program at the start and, finally, the lack of a consistent model to monitor the efficacy of the program over time. These false starts and stops add confusion to the process and typically result in a lack of faith by the workers who see the company invest in “change”, but then quickly revert back to old patterns.

A lack of consistency over time has the additional ill effects of not allowing the facility to “evolve” to a proactive maintenance mode. As a brief review, there are four levels of maintenance practices: run-to-failure, preventive, predictive and proactive. In run-to-failure programs, facilities adopt a technology, such as vibration analysis, to test or troubleshoot machines they know have problems. Preventive mode refers to maintenance departments that test machines on a schedule much like a preventive maintenance task, but do not act on the information gleaned from these tests. In predictive maintenance mode, one bases maintenance actions on the results of these tests to eliminate unnecessary preventive actions and avoid catastrophic failures.

The next stage in maintenance evolution is the proactive mode, whereby the facility has enough historical information about the machines and their failure modes to make educated decisions on how to extend their lives, replace them with machines of different makes or models or weed out inherent design flaws. To reach these lofty goals and bask in the glory of a highly efficient plant, one needs the backbone of an historically consistent program to lean on.

Looking at these evolutionary stages from a qualitative viewpoint, one will note that a plant in run-to-failure mode will contain machinery in various states of disrepair that seem to fail at random. Personnel in a run-to-failure plant will often be “busy” and may think that they are too busy to adopt new procedures! In the preventive mode, one is taking better care of one’s assets and they are failing less frequently. In predictive mode, one should be able to reduce preventive actions where applicable, extend machine life and drastically reduce unplanned outages. In proactive mode, one will have removed or redesigned troubled machinery and will have a plant that operates smoothly, predictably and efficiently over time. To attain this goal, consistency is required over a long period of time.

Training and Partnering
Ongoing training is an important ingredient of a successful program. However, it needs to be the correct type of training: a combination of complementary technology and managerial expertise. ISO and ASNT-certified vibration courses focus on machine dynamics and vibrations at a general technical level. It is important to take these courses, pass the exams and become certified, but this training alone will not necessarily translate into running a successful PdM program.

Equipment vendor training is often useful because it requires trainees to learn how to use a data collector and correctly set up software, but oftentimes does not expand outside these topics to provide the user with the tools he or she needs to run a successful program. While learning how to use data collection tools is an essential skill, it defeats the purpose if that same person does not know what to do with the data they’ve collected or how to manage a successful PdM program. One last note to consider about equipment vendor training: once the training has been completed, there is often no one around to ensure employees are using the tool correctly.

Onsite training, database reviews, program audits and choosing the correct long term partner, or PdM service provider, will go a long way to ensuring a successful program. If done correctly, a service partner will provide onsite training and support in managing your ongoing program in different capacities as your program evolves. At different times and in different circumstances, a good partner will take over parts of the program for you and later provide training and support as you bring the program back in-house.

Lack of Procedures / Methodology
As alluded to in the last section, a successful monitoring program is more than just interpreting graphs and data; it depends on consistency and repeatable performance. In general, we are interested in monitoring assets in order to diagnose deteriorating health or other problems. In order to do this correctly and accurately, one needs to test the assets in a repeatable fashion, month after month and year after year for many years. When this is understood, one will see that a successful program depends much more on consistency and program management (unfortunately, this aspect is not often taught in standardized courses) than it does on technical prowess. Another way of stating this is to say that a successful program depends on methodology and organization. A good partner or service provider with a good track record should be able to help you implement a program with tried and true methodologies and manage it for you.

Lack of Experience / Commitment
So far, we have touched on a number of different aspects of successful and unsuccessful programs, and it may be clear that there are a lot of issues involved. This highlights another problem, which is simply a lack of experience and/or commitment by a particular facility. Even if one has the best intentions and the highest level of commitment, it may take a long time to train an employee or group of employees to the point where they can implement a good maintenance program. In the meantime, as they are learning, little may be happening or things may be going in the wrong direction.

More typically, one will see a facility trying to accomplish a great deal without dedicating any money or people to the project or, when they do dedicate one or the other, it is only for a short period of time. Within this window, corporate priorities change, personnel change positions and, subsequently, the program gets shelved. Like many things in today’s world, PdM is becoming a highly specialized area of expertise where, if one wants to gain the depth of expertise currently existing in the marketplace, it takes a great deal of dedication and time, which, unfortunately, may not be compatible with the other 100 duties you are expected to take care of as part of your other work. This is one reason why partnering or outsourcing has become a viable option for many organizations.

Conclusion
Having gone through this brief exercise, perhaps it is becoming apparent why there are advantages to outsourcing PdM programs. While many companies have the expertise in-house to develop and sustain high-quality PdM programs, there are also many that might benefit more, or at least benefit more quickly, by outsourcing their predictive maintenance programs. It is a decision that each organization needs to explore for itself.

Service providers understand the context in which their technology is being employed and many have an enormous amount of experience in successfully managing large programs over extended periods of time. They know what is required to make a program succeed and can educate you and your staff on these points. A service provider should maintain a consistent approach over time and be able to maintain the appropriate expertise within their company, in part because their people completely believe in the technology they are employing. They will be experts at utilizing the tools and technology at their disposal, but this should take a backseat to their track record on managing long-term programs. Lastly, a service provider should be able to work with you to benchmark the program and demonstrate its return on investment over time.

Alan Friedman is a senior technical advisor for Azima DLI (www.AzimaDLI.com). With more than 18 years of engineering experience, Friedman has worked with hundreds of industrial facilities worldwide and developed proven best practices for sustainable condition monitoring and predictive maintenance programs. Friedman contributed to the development of Azima DLI’s automated diagnostic system and has produced and taught global CAT II and CAT III equivalent vibration analysis courses. Friedman is a senior instructor at the Mobius Institute, an independent provider of vibration training and certification, and an instructor for the Instituto Mexicano de Mantenimiento Predictivo (Predictive Maintenance Institute of Mexico). He is also the founder of ZenCo, a positive vibrations company.

Why PdM Programs Fail - Part 2

Why PdM Programs Fail:  Misuse of Technology
by Alan Friedman

A very good mechanic knows that you need the right tool for the job, but a common problem with PdM programs is that sometimes people acquire the tool before fully understanding what problem needs to be fixed.  Of course, when you have a hammer all of your problems look like nails, and what follows from this mistaken view is a whole list of reasons why PdM programs fail.  The biggest lesson I learned from engineering school is that the solution to a problem is most often found in its correct definition.  That is, solutions become obvious when you really understand what the problem is.

We laugh when we read the exchange between the tech support person and the new computer owner who calls to say his wireless Internet is not working.  After the tech support person laboriously goes through all of the steps to verify that the hardware and software are all installed and functioning, she asks who the person's Internet service provider is - and, in the pregnant pause that follows, we suddenly know what the real problem is! 

One reason PdM programs fail is that the goals of the program are not well defined or well understood.  A company purchases a technology like a vibration analysis system or infrared camera and then gets trained in how to use the tool, but not in what to use it for.  What they often fail to do is change processes and procedures in the plant to take advantage of the information this new tool provides.  In other words, you buy a screwdriver and learn how to loosen and tighten screws, but you somehow fail to see how this does or doesn't relate to the plant's overall operation.

So, what are the goals of a successful program?  Depending on your background, experience or role in your organization, you may have differing ideas about this, but how you view this will have a large impact on how you employ the technology and on the sorts of benefits you will receive.  It will also ultimately dictate your view of what is the best tool for the job.  To reiterate, I believe that the failure of many PdM programs can be traced back directly to confusion or disagreement on this core question:  what is the goal of the program?  Why are we purchasing this tool (or service), how will we use it and how will we measure our success?  In many cases, the tools are purchased before these questions are answered, if they are ever answered.   In other cases, the benefits one hopes to achieve are not in line with how the technology is actually being employed.

Let's consider two common viewpoints regarding the goals of a vibration analysis program.  One typical view is that vibration analysis is one of the best non-destructive technologies available to detect and diagnose mechanical faults and degradation in rotating machinery.  The goal of using the technology is to detect and diagnose faults in rotating machinery - period.

Another common view is that because vibration analysis can be used to detect wear in rotating machines, one can utilize this machinery condition information to better plan maintenance actions.  This leads to an increase in uptime, quality and plant performance and a decrease in unplanned maintenance, catastrophic failures and accidents.  These benefits, loosely defined as Overall Equipment Effectiveness (OEE), lead to higher profitability.  In this view, the lofty goal of the vibration analysis program is higher plant profitability.
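OEE is conventionally computed as the product of three rates, each expressed as a fraction: availability, performance and quality. A minimal illustration in Python (the example rates below are made up for demonstration):

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: the product of the
    availability, performance and quality rates, each a
    fraction between 0 and 1."""
    return availability * performance * quality

# A plant that is up 90% of the time, runs at 95% of ideal
# speed and produces 99% good parts:
print(f"OEE = {oee(0.90, 0.95, 0.99):.1%}")  # OEE = 84.6%
```

Because the three rates multiply, a modest PdM-driven improvement in availability moves the overall figure directly, which is what ties condition monitoring to profitability.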

This is the crux of many failed programs.  Perhaps a manager agrees to purchase a vibration monitoring system or a monitoring service.  In his mind, he imagines a 30:1 return on his investment.  Maybe he hasn't thought it completely through, but when he considers the benefits of such a system, his mind leans towards the goal of higher profitability.  He has read plenty of articles about condition monitoring and profitability and he is sold on the idea of it.  Now, a product has been purchased, some technicians and engineers have been given some training, but they understand the goal differently.  They use the equipment to detect problems in their rotating machinery; perhaps they even become quite skilled at it.  But beyond this, no organizational changes have been implemented to schedule maintenance based on vibration test results, nor have metrics been introduced to calculate and measure the impact of the technology on uptime and spare parts and, ultimately, its impact on the bottom line.

From the point of view of the engineers and technicians using the system, it appears successful.  They are able to troubleshoot machines and diagnose problems but imagine what happens when a recession hits and upper management goes around looking for programs to cut.  How will these technicians make the case that their vibration program should be preserved?  Where is the 30:1 ROI?  This is one major cause of terminated PdM programs.  The original idea was to impact the bottom line, but the technology was actually used in a more limited fashion.  The organizational and procedural changes required to utilize machine condition information to meet the goal of higher profitability were not implemented.

Another issue is the tool itself, the actual equipment or service that one purchases.  If we consider the two separate goals mentioned above, it will soon be obvious that the equipment we purchase, and how we use the equipment, will vary based on our goal.  Again, I will reiterate that most people purchase the equipment first and never fully reconcile the goal.

Here is a common scenario that describes a plant using vibration analysis to troubleshoot machines and determine what is wrong with them.  The plant either has a vibration expert on-site or uses an outside consultant.  Typically, someone hears a weird noise coming from a machine or they feel that the machine is vibrating too much.  Maybe the machine keeps failing unexpectedly or seems to have more problems than a similar unit.  Whatever it is, someone in the maintenance department believes there is a problem, and so they call the vibe guy to troubleshoot it.

The on-site expert or consultant will require customizable high tech equipment that allows him to set up a variety of special tests to troubleshoot the machine.  The data collection equipment may have a big screen because the analyst will do a lot of his analysis on the plant floor.  The equipment may also have many channels and it will likely be complex and difficult to use.  Because there is no historical data, the focus will not be on trending or looking for changes over time; therefore, his equipment will not require any advanced alarming or trending capabilities.  It would not be uncommon to expect the analyst to spend multiple hours, or even multiple days in some cases, diagnosing the problem and submitting his report.  This would most likely be a costly but, hopefully, infrequent expense.

Summary Scenario #1

Data collector needs:
   •  Big screen
   •  Many test types
   •  Customizable, multi-channel, magnet-mounted sensors
   •  Intelligence in the analyzer

Does not need:
   •  Alarming
   •  Trending
   •  Reporting
   •  Intelligent software

Analyst:
   •  Highly trained
   •  Highly paid
   •  Experienced

Program manager:
   •  Not much program management required

Now let's consider that the goal of the program is to use the technology to better plan maintenance, ultimately leading to a measurable impact on plant profitability.  What type of equipment will be best suited to meet this goal?

In this next scenario, the emphasis is placed on trending because the goal is to look for changes in machine condition and then base maintenance decisions on this information.  Time is spent up front defining standard test conditions and organizing the program.  This scenario calls for a low cost, efficient worker to collect data in exactly the same way, day in and day out, year after year on the same equipment.  The data collection equipment would be "idiot proof" with limited or controlled options for the user, or it may be an online system.  Test points on the machine would be screw type sensor pads or installed targets for magnet mounts to ensure repeatability.  Initiation of a standard test should take no more than a button press.  Because the data collection tasks, including the required equipment, have been defined in such a way as to ensure repeatable, relevant and historical data, there is no reason for the person collecting the data to look at or analyze the data on the plant floor.  This eliminates the need for the data collector's big screen.

The software will have to be very good at looking at trend data in an efficient way because this scenario also calls for testing most of the plant's machines frequently, not only machines with known problems.  Therefore, the analysis software will require the sophistication, not the data collector.  There won't be time (or need) for an analyst to spend multiple hours looking at data from each machine; a couple of minutes will be enough to see if the condition has changed, a couple more will be needed to understand how it's changed and to update the status and add a recommendation in the software.  Additionally, because trends based on good data should provide enough information to meet the goals of this scenario, the data collector will not require the capability to perform advanced customized tests, nor will the technician collecting the data require much training.
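The trending logic described above does not need to be elaborate to be useful. The Python sketch below flags a test point whose latest overall vibration level has grown relative to its baseline reading; the warning and alarm ratios are illustrative placeholders, not standard severity limits, and real software would use per-machine or statistically derived thresholds.

```python
def trend_status(readings, warning_ratio=1.5, alarm_ratio=2.5):
    """Compare the latest overall vibration level against the
    baseline (first) reading taken under standard test conditions.

    readings: chronological overall levels from the same test
    point, collected the same way every time.
    """
    baseline, latest = readings[0], readings[-1]
    ratio = latest / baseline
    if ratio >= alarm_ratio:
        return "alarm"
    if ratio >= warning_ratio:
        return "warning"
    return "ok"

# Monthly overall levels (in/s RMS) for one test point:
print(trend_status([0.10, 0.11, 0.12, 0.26]))  # alarm
```

Note that the comparison is only meaningful because the data was collected repeatably; this is why the scenario invests up front in fixed sensor pads and standard tests rather than in analyzer features.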

Lastly, since this scenario is concerned with improving maintenance decisions and relating them to the bottom line, the software should be part of a larger CMMS package or Plant Asset Management program.  Linking results to business goals such as improvements in uptime, quality and plant performance allows maintenance managers to accurately quantify their impact on profitability.

Summary Scenario #2

Data collector:
   •  Easy to use
   •  Human error proof
   •  Simple, standard tests or online system

Data collector doesn't need:
   •  Big screen
   •  Complex customized tests

Sensor:
   •  Triaxial sensor and stud mount

Software:
   •  Intelligent software
   •  Good alarming
   •  Trending and reporting features
   •  Links to CMMS and asset management software
   •  Metrics calculated from maintenance decisions up to plant profitability

User:
   •  Data collection technician
   •  Low skill
   •  Low wage

Program manager:
   •  High skill
   •  High wage

As you can see, the way we define the goal has a big impact on the type of equipment we will purchase and how this equipment is used.  It also points to a common reason why PdM programs fail.  People often buy the equipment with the most bells and whistles first, with little to no attention on the software and no idea how the monitoring program will be organized.  This is to say they buy the equipment defined in the first scenario with a vague idea that they will receive the rewards of using it as described in the second scenario.  They focus more on the tool than on program management.  When they receive training from the equipment vendor, it is often training in how to use the tool, not what to use the tool for.  People who fall into this trap will typically say that they only test "critical" machines, not understanding that they are doing this because they bought equipment that was not designed to test large numbers of machines efficiently.

Now let's return to the original question: Why do PdM programs fail?  One reason that I hope is clear by now is the possible confusion between condition monitoring tools and their accompanying goals.  The most common stumbling blocks are in understanding what the business goals are, employing the right tools, people and processes to meet those goals and establishing metrics to show how effective the program is in reaching the goals.  Oftentimes, plants employ highly trained individuals to use complex equipment solely to troubleshoot machines that are already known to be problematic.  This may be a valid use of the technology, but it is not PdM and does not bring the same rewards or ROI.  If you begin with the stated goal of increasing profitability and work down the ladder from there, equipment purchases and the way these tools are employed will be very different, and the profitability goal will be better realized.

Alan Friedman is a senior technical advisor for Azima DLI (www.AzimaDLI.com).  With more than 18 years of engineering experience, Friedman has worked with hundreds of industrial facilities worldwide and developed proven best practices for sustainable condition monitoring and predictive maintenance programs.  Friedman contributed to the development of Azima DLI's automated diagnostic system and has produced and taught global CAT II and CAT III equivalent vibration analysis courses.  Friedman is a senior instructor at the Mobius Institute, an independent provider of vibration training and certification, and an instructor for the Instituto Mexicano de Mantenimiento Predictivo (Predictive Maintenance Institute of Mexico).  He is also the founder of ZenCo, a positive vibrations company.  You can contact Alan at 206-327-3332 or at friedmanalan1@gmail.com

Thursday, July 8, 2010

Why PdM Programs Fail - Part 1


Why PdM Programs Fail:  Personnel Issues
By Alan Friedman

Many facilities and enterprises have failed to achieve the 10:1, 20:1 or even 30:1 Return on Investment (ROI) often promised with the introduction of a Predictive Maintenance (PdM) program.  Investments have been made in monitoring equipment and training but, unfortunately in many instances, data collectors are now collecting dust on a shelf in some storeroom waiting for someone to rediscover them.  And perhaps the discoverer will wonder what these artifacts may have been used for.  Meanwhile, on the factory floor, it is back to business as usual with unplanned outages as the norm, with everyone too busy fighting fires to get a handle on the situation.  Well, at least it's an exciting place to work!

This article will focus primarily on the personnel aspects of how a PdM program could potentially fail.  Let's start from scratch, pretending that we have no PdM program and we want to start one now.  This brings us to the first problem: how many times have we had to pretend that we had no program and now we are starting from scratch all over again - maybe with new equipment this time around - because the guy who used to run the program left for greener pastures and took everything with him except for a squarish-looking electronic device with some cables and a sensor hanging off of it?  If we are honest, most companies have probably given the PdM program thing at least one try.

Retention

Retention of highly trained personnel can be a problem.  While many are retiring, others are either promoted or make lateral moves to other companies.  The impact of these moves is especially devastating when individuals do not formalize their work into processes and procedures that other people can be trained to follow when they leave.  Unfortunately, many workers like to be "experts" and protect their position by shrouding their work in mystery, holding onto the secrets of their expertise to ensure that the company remains dependent on them.  Others may be less devious or insecure, but simply don't think ahead.  In other words, they don't establish procedures so the company can keep the program running in their eventual absence.  In either case, we can say for certain that the loss of the resident expert is often enough to doom a PdM program, and banish its high tech equipment to the unreachable parts of the highest shelves.

The lesson here is that you should catalog work procedures and processes now.  Formalizing procedures is one of the best steps you can take to not only enhance the effectiveness of your program, but also to institutionalize it, so that the program becomes bigger than one person, or even a handful of people.  It can then survive the loss of key personnel.

Training

Let's say that we are going to give it another try.  How long will it take to train the new resident expert, or experts, to the point where they have a handle on the technology and can effectively manage a PdM program?  One year?  Two?   Five years?

Here is another very important question to ponder.  Will we view PdM responsibilities as a full-time position or just something "extra" that has to be done after the "real" work is complete?  Will this person's manager give them the time, training and equipment necessary to make them successful, or will the PdM program be seen as just another responsibility added to an already busy schedule?  Remember, when a plant is operating without an effective PdM program, unplanned failures and a general lack of knowledge about the condition of the plant's assets are a given.  Therefore, maintenance people are constantly operating in "firefighter" mode to fix the next emerging fire.

In this situation, it is difficult to step back and put together a strategy to move up the maintenance evolutionary ladder to the rung of PdM.  In order to step back and do this, the person we appoint to help with this process (a.k.a. our new PdM expert-in-training), needs to be given the time, space and support to make the transition happen, which shouldn't be expected to happen overnight.

Strategic Direction

One last item worth mentioning is the problem of abrupt changes in strategic direction.  I have seen successful programs uprooted by managers who, when initially hired, arrive on the scene with no knowledge of PdM and do one of two things: they either fire the staff responsible for these tasks or they don't give the staff the time or permission to continue working on their programs.  To be sure, this problem is more common in circumstances where the people running the PdM program have not adequately documented the efficacy of their work (i.e. they do not have the evidence handy to make a case for why the plant is better off keeping these programs in place).

Trends

In recent years, we have seen a shift in the PdM industry towards outsourcing PdM activities to companies that have a long track record of successfully managing these sorts of programs as well as the technical expertise to solve difficult problems.  Some reasons for this shift have been touched on in this article, namely the difficulty a facility can have in hiring, training and retaining individuals who have the depth of experience needed to turn the advertised potential ROI of PdM into real results and real money.  Even those facilities that have seen substantial gains in evolving their maintenance efforts from Reactive Maintenance to PdM may abruptly devolve back into firefighter maintenance mode with the loss of a key expert or because of a change in direction taken by a manager unfamiliar with the benefits of PdM.

One solution to these common problems is to team up with a well-established service provider who takes on the responsibility for keeping the program consistent year after year.  A quality strategic partner will have the necessary expertise, not only with the PdM technologies, but also in knowing how to strategically deploy them so that they positively affect the company's bottom line.

Alan Friedman is a senior technical advisor for Azima DLI (www.AzimaDLI.com).  With more than 18 years of engineering experience, Friedman has worked with hundreds of industrial facilities worldwide and developed proven best practices for sustainable condition monitoring and predictive maintenance programs.  Friedman contributed to the development of Azima DLI's automated diagnostic system and has produced and taught global CAT II and CAT III equivalent vibration analysis courses.  Friedman is a senior instructor at the Mobius Institute, an independent provider of vibration training and certification, and an instructor for the Instituto Mexicano de Mantenimiento Predictivo (Predictive Maintenance Institute of Mexico).  He is also the founder of ZenCo, a positive vibrations company.  You can contact Alan at 206-327-3332 or at friedmanalan1@gmail.com

Thermocouples in Furnaces and Ovens

The temperature inside furnaces and ovens is commonly monitored and controlled by thermocouples inserted into the heated chamber.  One feature common to all furnaces is the presence of isotherms within the heated chamber.  Isotherms are regions of equal temperature, similar to the contour lines on a map that mark areas of equal altitude.  Isotherms arise from temperature gradients within the furnace created by uneven heating, inadequate circulation, uneven distribution of the workload within the furnace, etc.  There are also isotherms within the wall of the furnace, since the outside surface of the wall is close to ambient temperature while the inside surface may be 3,000 degrees Fahrenheit or more.

Thermocouples are installed in the furnace by machining an appropriate hole through the wall.  The thermocouple then cuts across a myriad of isotherms and creates a conductive path for heat to flow from the hot region to the cool one.  Since a thermocouple measures its own temperature, it is constantly being cooled by this conduction.  The end result is that the thermocouple's output reflects an equilibrium between the heat coming into the junction and the heat being carried away by conduction through the thermowell to the outer jacket of the furnace and the atmosphere.  We call this error the "Stem Effect."  It is influenced by heat conduction in the wires, insulation and the sheath or thermowell of the thermocouple, and it is virtually impossible to predict its magnitude.  Even if you could determine the error at a particular temperature, it would change as the temperature changes, since the thermal conductivity of all materials varies with temperature.

The basic physical principles involved are as follows:

  1. Heat is always exchanged between two objects at different temperatures, and it always flows from the hotter object to the cooler one.
  2. No heat is exchanged between two objects at the same temperature.

Our objective is to design and install a thermocouple in a furnace so that its sensing tip is always in equilibrium with the temperature of interest, and accurate measurements are therefore made.

To minimize the "Stem Effect" error, install the thermocouple parallel to the isotherms (i.e., perpendicular to the direction of heat flow) for a distance of at least 20 probe diameters.  With a 1/8 in. OD probe, for example, at least 2-1/2 in. of the tip should run parallel to the isotherms.

Accuracy of temperature measurements made within furnaces can be greatly improved if the temperature sensors are installed parallel to the isotherms for a distance equal to 20 times the diameter of the protection sheath.  This will reduce the “Stem Effect” (error caused by conduction in the sheath, wires and insulation) to an insignificant amount.
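The 20-diameter rule above reduces to a one-line calculation.  The short sketch below (Python, used purely for illustration; the function name is our own, not from any standard) reproduces the 1/8 in. example from the text:

```python
# Minimum immersion length for a thermocouple probe, per the rule of
# thumb above: the tip should run parallel to the isotherms for at
# least 20 probe (sheath) diameters to keep the "Stem Effect" small.

def min_immersion_length(probe_od_in: float, diameters: int = 20) -> float:
    """Return the minimum parallel-to-isotherm run length, in inches."""
    if probe_od_in <= 0:
        raise ValueError("probe OD must be positive")
    return diameters * probe_od_in

# Example from the text: a 1/8 in. (0.125 in.) OD probe needs 2.5 in.
print(min_immersion_length(0.125))  # 2.5
```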

Thermocouple Standards and Calibrations

Thermocouple Standards

As shown in Table I, there are seven thermocouple standards applicable to heat-treating furnaces: Reference Standard, Primary Standard, Secondary Standard, Temperature Uniformity Test, System Accuracy Test, plus Working and Load standards.  The table summarizes the types of thermocouples that can be used for each category, along with calibration frequency and accuracy requirements.  These standards were established by SAE (Society of Automotive Engineers) specification SAE-AMS-2750 Rev. C in 1990 and have been adopted by the U.S. Department of Defense.  It is a valuable reference, and we suggest that anyone who manufactures or uses heat-treating furnaces keep a copy of this specification in their Quality Control department.

Table I
Outline of Sensors*

| Nomenclature | Description | Calibration Period | Calibrated Against | Use | Max Error Limit / Correction Factor (°F) |
| --- | --- | --- | --- | --- | --- |
| Reference Standard | Platinum / Platinum-Rhodium | 5 years | NIST | Primary Standard Calibration | None |
| Primary Standard | Platinum / Platinum-Rhodium | 3 years | Reference Standard | Secondary Standard Calibration | ±2.7° or ±0.25%** |
| Secondary Standard | Base or noble metal | 1 year (base), 2 years (noble) | Primary Standard | Test Sensor Calibration | Base: ±2° or ±0.4%**; Noble: ±2.7° or ±0.25%** |
| Temperature Uniformity Test | Base or noble metal | 3 months (base), 6 months (noble) | Primary or Secondary Standard | Temperature Uniformity Tests | ±4° or ±0.75%** |
| System Accuracy Test | Base or noble metal | 3 months (base), 6 months (noble) | Primary or Secondary Standard | System Accuracy Tests | ±2° or ±0.4%** |
| Working | Base or noble metal | Before installation | Primary or Secondary Standard | Installation in Equipment | Class 1: ±2° or ±0.4%**; Class 2: ±4° or ±0.75%** |
| Load | Base or noble metal | 3 months (types N, R, S), 6 months (other) | Primary or Secondary Standard | Insertion in Loads | ±4° or ±0.75%** |

*   Sensors of equivalent or greater accuracy are acceptable.
**   Percent of reading, if greater than correction factor in degrees.
Aerospace Material Specification SAE AMS-2750 Rev. C, issued 1980-04-15, revised 1990-04-01, superseding AMS-2750B.  Society of Automotive Engineers, Inc., 400 Commonwealth Drive, Warrendale, PA 15096 (1990).
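Most of the limits in Table I take the form "±X° or ±Y% of reading, whichever is greater" (see footnote **).  The sketch below shows how that rule evaluates in practice; it is an illustration only (function and variable names are ours, not from AMS-2750):

```python
# "Degrees or percent of reading, whichever is greater" tolerance
# check, as used throughout Table I.  All temperatures in °F.

def within_limit(reading_f: float, error_f: float,
                 limit_deg: float, limit_pct: float) -> bool:
    """True if |error| is within the larger of the fixed degree limit
    and the percent-of-reading limit."""
    allowed = max(limit_deg, abs(reading_f) * limit_pct / 100.0)
    return abs(error_f) <= allowed

# A Class 1 working sensor (±2° or ±0.4% of reading) at 1,000°F:
# the percent term governs (0.4% of 1,000°F = 4°F), so a 3°F error
# passes but a 5°F error does not.
print(within_limit(1000.0, 3.0, 2.0, 0.4))  # True
print(within_limit(1000.0, 5.0, 2.0, 0.4))  # False
```

Note how the fixed degree limit governs at low readings (at 100°F, 0.4% is only 0.4°F, so the ±2° term applies), while the percentage governs at high readings.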

Calibration Services

Nanmac’s calibration laboratory will calibrate bare or insulated thermocouple wire, assembled thermocouples, RTDs, thermistors and instruments.  All of our calibration equipment is calibrated against National Institute of Standards and Technology (NIST) standards, and our calibration data are traceable to NIST.  Calibration costs are listed in the chart below.  The maximum temperature of our standard services is 2,100 degrees Fahrenheit.
Notes:

  1. All temperature sensors must be at least 12 inches long to minimize conduction errors.
  2. Calibrations to 2,950°F can be made on a special basis (contact factory for details).
  3. Calibrations at cryogenic ranges can also be made on a special basis.
  4. Your instruments and sensors can also be calibrated and certified (contact factory for details).