Linear Position Sensor FAQ

What is a Linear Position Sensor?

An Overview of "Linear Position Sensors", How They Work, and How to Choose the Right Sensor

To understand what a linear position sensor is, consider that a "sensor" is an electromechanical device that measures a physical parameter such as position, temperature, pressure, or flow and provides an output in some electronic format. This output may be a proportional analog signal such as a DC voltage or current, or a digital signal that is connected to a digital display or sent over a network to be interpreted by a computer or PLC. In some cases it may be merely a state change; in effect, it is an "on" or "off" switching signal sent to a control system computer.

"Position" implies measuring the location of an object that is movable in relation to some other point or object. Thus, an example of position can be the dimension of a manufactured part, the extension of a hydraulic actuator, the location of a cutting tool on a machine, or the opening of a steam valve in a power plant based on the position of the valve stem relative to the valve body.


The "linear" part of the description means measuring the distance in a straight line or in a single axis to a point or surface or object in relation to some reference point, as opposed to the angular motion of a rotating shaft.

There are at least a dozen technologies for measuring linear position, some without any actual contact with the object being observed and others that require a movable element to be in physical contact with the object. An example of the former is a proximity sensor, which indicates by a state change whether an object or target is nearby; this is more of a qualitative measurement. However, a very large number of linear position sensors provide a quantitative, proportional electrical output, and a majority of them require physical contact with the object.

Another variable in a linear position measurement is whether it is incremental, referenced to some "home" position, or absolute, a stand-alone measurement. Whether a sensor measures incrementally or absolutely is typically a function of its underlying technology, which often affects the choice of sensor for any particular linear position measurement.

Many sensor technologies are based on changes in the device's electrical properties, such as inductance, resistance, capacitance, magnetic field, or resonant frequency, produced by the motion of a movable element like a rod or shaft. Several of the non-contacting sensors rely on measuring the time for a wave to travel toward the object being measured and be reflected back to the source.

Most sensing technologies for linear position measurement have several important technical attributes that need to be studied in choosing the optimum sensor for any particular measurement application, including:


Range: What is the range of motion or position measurement for which the sensor is optimized?

Accuracy: What is the required precision and confidence level in the measurement?

Resolution: What is the smallest change in position input that the sensor can measure?

Environment: Under what conditions of temperature, vibration, shock, EMI, etc. will the sensor operate?

Reliability/Life: What is the projected life of the sensor, and how often must it be recalibrated or replaced?

Installed Costs: What are the total costs, including not only the purchase price but also setup, calibration, operator training, spares, etc.?

In terms of sensor range, the measurement may be what could be called a "macro" position measurement, typically covering from several inches or centimeters up to many feet or meters, or it may involve small motions, often known as "micro" position measurements, usually fractions of an inch or a few millimeters. Examples of the latter would be strain measurement in materials under load, the position of a valve spool for feedback stabilization of a fluid power system, or the expansion or contraction of a mechanical part with ambient temperature change. Examples of macro position measurements include the fairly precise spacing between the rails on a railroad track bed, or the following distance between a self-driving car and the vehicle ahead of it.

Clearly, a sensor able to measure position or dimensional changes down to a few thousandths of an inch is not needed for vehicle spacing or rail gaging, so resolution and accuracy are not as high a priority as reliability and environmental robustness for such applications. On the other hand, superior resolution and high accuracy would be very important for instruments making quality assurance measurements in a manufacturing plant, checking parts or controlling machines, particularly where the environment can be more readily controlled as well.

These examples illustrate the technical features and specifications, collectively called sensor performance parameters, that need to be considered when choosing the proper sensor technology for a position measurement. These parameters must be balanced against the application's economic issues, which are wrapped up in the installed costs. This balance is a very individualized consideration for any application, and striking it is often referred to as evaluating the price-to-performance ratio of the proposed sensing system. The process can be made somewhat easier by identifying the top-tier requirements with the highest priority and focusing attention on them, to avoid being distracted or overwhelmed by lower priority issues.

This treatise is provided to give potential linear position sensor users or specifiers an overall look at the field of position sensing and the technologies available, and ultimately to help them better understand the issues involved in making a sensor choice. Experienced applications engineers employed by potential suppliers can also offer considerable help, so it pays to inquire about a prospective supplier's support staff.

We have real world experience and know how to solve your measurement challenges. Need technical help with your measurement application? Call us today at 856-727-0250 or send us a message.

What is Sensor Linearity and What Does It Mean?

Most analog output sensors have general specifications such as linearity (or non-linearity), repeatability, and resolution, as well as environmental specifications like operating temperature or shock and vibration, and dynamic specifications like response or bandwidth. All of these specifications represent limits of error or sources of uncertainty related to the sensor's output compared to its input. Many of these terms are fairly easy to understand by their wording alone, but linearity error or non-linearity is not in that category.

Definition of Linearity Error or Non-linearity

Linearity, or more correctly non-linearity, is a measure of the maximum deviation of a sensor's analog output from a specified straight line applied to a plot of the output versus the input parameter being sensed (called the measurand), under constant environmental conditions. The more linear the sensor's output, the easier it is to calibrate and the lower the uncertainty in its output scaling. However, understanding a sensor's non-linearity specification requires understanding the nature of the reference straight line.
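
Stated compactly, using generic notation that is not part of any particular data sheet, if the reference straight line is y = mx + b and the sensor produces output V_i at input position x_i, the non-linearity as a percentage of full scale output (FSO) can be written as:

$$\text{Non-linearity}\;(\%\,\text{FSO}) = \frac{\max_i \lvert V_i - (m x_i + b)\rvert}{\text{FSO}} \times 100$$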

Reference Straight Line

There are several possible reference straight lines that could be utilized to express a sensor's linearity error. The optimum choice based on statistics would be a "best fit line". But just what is the criterion for "best fit"? Both experience and statistics favor a line calculated by the "method of least squares", whereby the sum of the squares of the deviations from the desired line is mathematically minimized. Such a best fit straight line (BFSL) is broadly used as a basis for a sensor's linearity error or non-linearity, not merely because it is statistically appropriate but also because it has been validated in real world measurements.
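
To make the BFSL idea concrete, here is a minimal Python sketch, assuming a small table of calibration data; the function name, the use of NumPy, and the sample readings are illustrative only, not part of any standard or vendor procedure:

```python
# Illustrative sketch: fit a best fit straight line (BFSL) to calibration data
# by least squares, then report the maximum deviation as a percentage of FSO.
import numpy as np

def bfsl_nonlinearity(position, output):
    """Return (slope, intercept, nonlinearity as +/- % of FSO) for a unipolar sensor."""
    position = np.asarray(position, dtype=float)
    output = np.asarray(output, dtype=float)

    # Least-squares fit: minimizes the sum of the squared deviations from the line.
    slope, intercept = np.polyfit(position, output, 1)

    # Deviation of each calibration point from the BFSL.
    deviation = output - (slope * position + intercept)

    # Express the worst-case deviation as a +/- percentage of full scale output.
    fso = output.max() - output.min()
    nonlinearity_pct = 100.0 * np.max(np.abs(deviation)) / fso
    return slope, intercept, nonlinearity_pct

# Hypothetical calibration data for a 0 to 2 inch, 0 to 10 V DC sensor.
pos = [0.0, 0.5, 1.0, 1.5, 2.0]       # inches
out = [0.01, 2.49, 5.02, 7.48, 9.99]  # volts
slope, intercept, nl = bfsl_nonlinearity(pos, out)
print(f"BFSL: {slope:.3f} V/in + {intercept:.3f} V, non-linearity = +/-{nl:.3f}% FSO")
```

The least-squares fit yields the BFSL slope and intercept, and the worst-case deviation from that line, expressed as a percentage of FSO, is the non-linearity figure that typically appears on a data sheet.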

Impact of Other Errors

Because the linearity error applies to the analog output of the sensing system, recognition must be given to other errors that can affect the output besides sensor non-linearity. To fully comprehend what the linearity error specification actually means, several pre-conditions must apply to the measurement process. First, environmental factors like ambient temperature must be reasonably constant, or their changes must be small compared to the linearity error. Next, the repeatability and hysteresis errors in the sensor itself must also be small compared to its linearity error. Third, any non-linearity in the system output caused by ancillary electronics in the measuring system must also be very small compared to the sensor's linearity error. And finally, the resolution of both the sensor and the output reading instrument must be sufficient to resolve the small deviations in output caused by linearity error.

Why Worry About Other Errors?

Measurement errors cannot simply be added together arithmetically, but are correctly combined by a Root-Sum-Squares (RSS) calculation. So only if these other errors are small will linearity error be the dominant source of measurement uncertainty; otherwise, the weighting effect of the other errors can lead to serious uncertainty about the measurement results. This is also one of the reasons that measuring linearity error is more complicated than it might seem. Not only must the effects of ambient factors like temperature and humidity be minimized, but the linearity error must be measured with equipment at least ten times more precise than the error being measured, which usually means highly precise equipment normally found only in metrological calibration or national standards laboratories.
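
As an illustration of the RSS idea (the error terms listed here are generic examples, not the complete list for any particular sensor), the combined uncertainty takes the form:

$$E_{\text{total}} = \sqrt{E_{\text{linearity}}^2 + E_{\text{repeatability}}^2 + E_{\text{hysteresis}}^2 + E_{\text{temperature}}^2 + \cdots}$$

From this form it is clear that the total is dominated by the linearity term only when the other terms are substantially smaller.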


How Linearity Error is Specified

The maximum linearity error using a BFSL reference for a unipolar output sensor is usually expressed as a (±) percentage of Full Scale Output or Full Span Output (FSO). For a bipolar output sensor, its maximum linearity error is expressed as a (±) percentage of Full Range Output (FRO), i.e., from (-) FSO to (+) FSO.

Example

To illustrate the effects of linearity error, consider a sensor with a range of 0 to 2 inches, an output of 0 to 10 V DC, and a linearity error specified as ±0.25% of FSO. The sensor has a scale factor of 5 Volts per inch and an FSO of 10 V DC, so non-linearity could cause an error of ±25 mV in the output, which is equivalent to an error of ±0.005 inches. The user must then decide whether this level of error is tolerable. This is illustrated by Figure 1 below, which shows both the sensor's analog output in blue and its point-by-point error from the reference line in orange. Keep in mind that the error values are so much smaller than the output values that, if plotted at the scale of the blue line, they would be indiscernible.

Figure 1: Sensor analog output (blue) and point-by-point linearity error from the reference line (orange)
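
As a quick sanity check of the arithmetic in this example, here is a short Python sketch using the same assumed values (0 to 2 inch range, 0 to 10 V DC output, ±0.25% of FSO linearity error):

```python
# Worked-example check using the assumed values from the text above.
full_scale_output_v = 10.0       # volts (0 to 10 V DC output)
measurement_range_in = 2.0       # inches (0 to 2 inch range)
linearity_error_pct_fso = 0.25   # +/- percent of FSO

scale_factor = full_scale_output_v / measurement_range_in            # 5 V per inch
error_volts = full_scale_output_v * linearity_error_pct_fso / 100.0  # +/-0.025 V
error_inches = error_volts / scale_factor                            # +/-0.005 inch

print(f"Scale factor: {scale_factor:.1f} V/in")
print(f"Worst-case linearity error: +/-{error_volts * 1000:.0f} mV = +/-{error_inches:.3f} in")
```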

To summarize:


1. Linearity error is referenced to a Best Fit Straight Line calculated by the least squares method.

2. Low sensor linearity error increases measurement precision and eases sensing system calibration.

3. Errors due to a sensor's temperature, repeatability, hysteresis, and resolution can affect output linearity.

4. Sensor errors do not simply add up arithmetically but must be combined by a Root-Sum-Squares calculation.

5. A sensor's calibration equipment must be a minimum of ten times better than the measurement precision desired.