On PID control
ErNa
Posts: 1,752
While control loops are a focus for engineers, physicists under normal conditions don't touch the issue, so what follows may look a little strange.
To start with, I want to look at the simplest case: a P controller.
What is a controller good for? If something goes wrong, the idea is to make it better. As this is very general, I want to be more concrete and introduce a simple example.
Imagine something (an object) exists; then, if it is not a ghost, it is in a place. This place might be the place I want the object to be. But, as there are many places, the probability that the object is located elsewhere is huge. We introduce the distance between the wanted place (set value) and the actual place (is value) as the "error" we want to minimize.
To correct the error we need an action, such as applying a directed force to the object. But how will the force change the place? Changing place we call movement, and movement is characterized by velocity. So how is the application of force related to that velocity? Our gut feeling says: the more force, the more speed we gain, and the faster the object reaches the set value of place. But from experience we know an object has the property of mass, so acceleration and speed depend not only on force but also on mass. An object without mass cannot be controlled, as the smallest force would result in infinite speed.
For this reason, an object that is to be controlled MUST have mass, and therefore inertia.
Now the phrase "the faster the object reaches the set value of place" is ambiguous. It has two meanings: reach the set position in a short time AND at high velocity. So we do not reach the set value but pass it, introducing a new error.
A P controller by definition creates a force that is proportional to the error. If the initial condition is no error, no force is created and nothing happens. But if there is an initial error (in the positive direction), there is a negative force; the object starts to move and gains kinetic energy. As long as the positive error exists, there is force and the kinetic energy increases. That means: the moment the set position is reached, the speed is at its maximum, and the next moment we have a negative error and generate a positive force, which now decelerates the object. The moment the object comes to a standstill, the error equals the initial error but in the negative direction. Now the game starts from the beginning.
What we see here is: a P controller acting on a mass with inertia is nothing but a harmonic oscillator, and without damping it will oscillate forever once excited.
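This is easy to check numerically. A minimal sketch (the gains, mass, and initial displacement are my own assumptions, just for illustration): a pure P controller driving a frictionless mass, integrated with semi-implicit Euler. The turning-point amplitudes don't decay.

```python
# P-only control of a frictionless mass: force = -Kp * error.
# Semi-implicit Euler integration; all parameter values are assumed.

def simulate_p(kp=4.0, mass=1.0, x0=1.0, setpoint=0.0, dt=0.001, steps=20000):
    x, v = x0, 0.0
    peaks = []           # amplitude at each positive turning point
    prev_v = 0.0
    for _ in range(steps):
        error = x - setpoint
        force = -kp * error              # P controller: proportional to error
        v += (force / mass) * dt
        if prev_v > 0.0 >= v:            # velocity sign change: turning point
            peaks.append(abs(x - setpoint))
        prev_v = v
        x += v * dt
    return peaks

peaks = simulate_p()
# Every recorded amplitude stays near the initial error of 1.0:
# the loop is a harmonic oscillator, not a regulator.
```

Adding a viscous-friction term to the plant would make the peaks shrink, which is exactly the "side effect" damping discussed further down.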
That is where the "D" steps in.
If there is no error at start-up, the system rests in peace. Now we establish an error by creating a displacement. That means the error changes, so its derivative is large. To correct the error we apply a force proportional to the error (the "P" component) and a second force proportional to the rate of change of the error (the "D" component). That means: with a D, the restoring force is initially higher and the error gets smaller. But then the derivative becomes negative and the restoring force is reduced. When the P and D components are properly adjusted, the D part overcompensates the P part; that means, when returning to the set position, the object is decelerated and comes to a standstill at the set position.
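The same simulation with a D term shows the difference. A sketch under assumed parameters (kd is chosen near critical damping, kd = 2·sqrt(kp·m), which is my choice, not from the post):

```python
# PD control of the same frictionless mass. The D term uses
# d(error)/dt, which for a fixed setpoint is just the velocity.
# All parameter values are assumed for illustration.

def simulate_pd(kp=4.0, kd=4.0, mass=1.0, x0=1.0, dt=0.001, steps=20000):
    x, v = x0, 0.0
    for _ in range(steps):
        error = x                       # setpoint = 0
        force = -kp * error - kd * v    # P restores, D extracts kinetic energy
        v += (force / mass) * dt
        x += v * dt
    return x, v

x, v = simulate_pd()
# Both position and velocity end up essentially at zero:
# the oscillation from the P-only case is damped out.
```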
But what if the D compensates the oscillation, yet the error doesn't come to zero? In this case P creates a restoring force, but there is no movement. That can only be the case if there is another force of value -P with an unknown (external) origin. So, if a PD controller comes to a standstill at an error position, it actually is a scale that measures an external force. To compensate this force we have to apply a third term, which we call the "I" component.
Now the PID controller is complete: the D component damps the movement by controlled extraction of kinetic energy from the object. The P component brings the object back to the set position; but if there is still an error, this means an external force exists, and this force is proportional to the error and compensated by the I component.
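Putting all three terms together, here is a sketch with an assumed constant external (disturbance) force. P alone would leave a steady-state offset against it; the integral accumulates until its output exactly cancels the disturbance:

```python
# Full PID positioning a mass against a constant external force.
# Gains, mass, and the disturbance value are assumed for illustration.

def simulate_pid(kp=20.0, ki=5.0, kd=10.0, mass=1.0,
                 external=-3.0, setpoint=1.0, dt=0.001, steps=30000):
    x, v, integral = 0.0, 0.0, 0.0
    prev_err = setpoint - x
    for _ in range(steps):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        force = kp * err + ki * integral + kd * deriv
        v += ((force + external) / mass) * dt
        x += v * dt
    return x, ki * integral

x, i_force = simulate_pid()
# At steady state err = 0, so the I output alone holds the load:
# i_force converges to +3.0, exactly cancelling external = -3.0.
```

This is the "scale" idea made concrete: the converged integral output is a measurement of the external force.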
Comments
Possible trivia: I've come to the conclusion that the three P, I and D terms are effectively a 3rd order filter.
And using multiple transducers often seems to me a method of over-engineering: a lot helps a lot. Mostly not true.
What I wrote up is just a theoretical approach. Some experiments will follow (hopefully soon).
Over-engineering is a hallmark of industrial automation.
Not at all. In an application where some form of backlash is inevitable, the motor mounted encoder cannot be relied upon for accurate positioning of the load. Mounting the single feedback device on the load instead, now creates the problem of loop instability, thanks to the mechanical lost motion.
But with this discussion I want to focus on how to understand PID control in real-world terms.
I showed (not in depth) why a P controller is nothing but an oscillator, and only side effects like friction damp the oscillation. But if there is oscillation we can determine its frequency, and so there is a chance to damp exactly that frequency. And if, e.g., the load changes, the frequency will change; the oscillation can still be damped and, as a side effect, the load is determined.
As you are no doubt aware, REAL robots and REAL machining centers don't rely on open-loop stepper motors. As far as I am concerned, no feedback = no (verifiable) control.
Cheers
Sorry, you have lost me on your pendulum-connected-to-a-motor.
I cannot understand what you mean.
Please add a sketch or a diagram.
Many thanks.
Physicists do indeed look at such problems. For example they can build you a mathematical model of a triple inverted pendulum and then derive a control algorithm to keep it upright.
Anyway, yes, a physicist likes to boil things down to the simplest case: ignore friction, ignore latency in the control loop, ignore backlash and dead zones, ignore noise, etc.
In that way you can probably derive perfect control strategies for many simple cases.
In the real world, we have friction, backlash, latency, noise, etc., all of which are hard to measure and hard to build a mathematical model for.
The end result is we often throw a PID control loop in and then tune the thing manually as best we can.
Or we might use fuzzy logic, handy when the system to be controlled is hard to analyse rigorously.
Oddly, I was thinking about all this today. I just ripped an old hard drive apart, so now I have the head actuator to play with: a nice arm with a coil at one end, sitting between two super strong magnets and turning on a nice smooth bearing. I was wondering how I would build a control system to position it.
Problem is that many systems are not simple to model. And hence not simple to derive control strategies for.
For example, as far as I'm told by people that study these things, it's not possible to balance a two wheel bot using a PID.
Turns out that people do that anyway, perhaps with some hack or other, without getting deeply into the maths of the thing.
I'm not sure where the stepper motors fit in here.
The motion trajectory generator simply increments the command position in a time-sliced fashion which inherently takes care of both velocity and position control. The PID settings are responsible for the motor following the motion trajectory in a tight and stable fashion.
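The time-sliced idea can be sketched as follows. This is my own illustration, not Galil's actual implementation; the trapezoidal velocity profile and all names are assumptions. The generator advances the commanded position a little each tick, so the PID only ever tracks a smoothly moving target instead of jumping to the final one:

```python
# Time-sliced trajectory generator with a trapezoidal velocity profile.
# Profile shape and parameters are assumed for illustration.

def trapezoidal_profile(distance, v_max, accel, dt):
    """Yield the commanded position each tick for a point-to-point move."""
    pos, vel = 0.0, 0.0
    while pos < distance:
        decel_dist = vel * vel / (2.0 * accel)      # room needed to stop
        if distance - pos <= decel_dist:
            vel = max(vel - accel * dt, accel * dt) # decelerate (keep a crawl)
        elif vel < v_max:
            vel = min(vel + accel * dt, v_max)      # accelerate
        pos = min(pos + vel * dt, distance)
        yield pos

targets = list(trapezoidal_profile(distance=10.0, v_max=2.0, accel=4.0, dt=0.01))
# Each tick the PID is handed targets[k] as its setpoint; the commanded
# position rises monotonically and never steps more than v_max * dt.
```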
Digital Motion Control has been this way since Dr. Jacob Tal (Galil Motion Control, Rocklin, CA) developed the first DMC chip in the 1980s.
Most, or at least the majority, of the temperature control loops I have come across were using a PID loop, or at least something that looked very much like the same algorithm as a servo control loop. The major difference I could see was due to the increasing difference between ambient temperature and the heated sample or area. Can you post an example of both that shows the difference?
1) Huge latency in the control loop due to thermal inertia. If nothing else that makes it time consuming and tedious to tune the parameters.
2) Control is asymmetrical. If you are controlling servo position, say, then you get a position error that can be both negative and positive, and you can drive the actuator negatively and positively. But when controlling temperature, often you can only pump heat in if the temp is low; if the temp is too high you can't suck heat out, you just have to let it cool down in its own good time. The rate at which it cools of course depends on the current ambient temperature. Basically the system is very nonlinear.
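The asymmetry can be shown with a toy model. A sketch under assumed coefficients (first-order thermal plant, heater clamped to [0, 1], cooling only passive toward ambient): with a P-only loop, the clamp plus passive cooling leaves the temperature stuck below the set point.

```python
# Asymmetric temperature loop: heating is active, cooling is passive.
# First-order plant model; all coefficients are assumed for illustration.

def step_thermal(temp, heater, ambient=20.0, k_loss=0.02, k_heat=2.0, dt=1.0):
    """One tick of the plant: heater adds heat, losses pull toward ambient."""
    return temp + (k_heat * heater - k_loss * (temp - ambient)) * dt

def control_asymmetric(setpoint, temp, kp=0.05):
    power = kp * (setpoint - temp)
    return min(max(power, 0.0), 1.0)    # clamp: no negative (cooling) output

temp = 20.0
for _ in range(600):
    temp = step_thermal(temp, control_asymmetric(60.0, temp))
# temp settles around 53.3 °C, short of the 60 °C set point:
# a P-only loop on this plant has a steady-state offset,
# which is where the I term (and anti-windup care) comes in.
```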
The temperature control routines I wrote myself I just used the same algorithm as I've always used for motor control and those were easy to tune, but that doesn't seem to be the norm for temperature control.
Controlling temperature is a more complicated proposition than controlling servo position because of those two factors. Add to that the variety of physical systems (furnace, oven, building, etc.) as well as the number of methods to add energy and it can be quite a task. Adding the ability to drive temperature negatively adds to the problem since energy costs need to be taken into account.
Since all those terms use the difference between the set point and the measured position/temperature, they do affect the output of the loop.
I suppose it could be done that way, but it would involve more complicated calculations than a PID loop, particularly for the most common task of heating a building.
1) You want to control the position of an object from left to right horizontally. OK, wrap a PID loop around it and see what you can do.
2) But now we turn everything through 90 degrees so the position is up/down rather than left/right. Now you have a constant force of gravity acting on it. That means that in the stable, zero error, situation you have to be applying an upward force that exactly counters the force of gravity. Well, perhaps the PID can sort that out but it might be an obvious idea to just apply that upward force constantly thus giving the object "neutral buoyancy" as it were. Then let the PID work on that.
3) But what if this system is sometimes horizontal, sometimes vertical, and anywhere in between? Perhaps it's an idea to measure the inclination and adjust the force required to achieve neutral buoyancy.
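The three steps above amount to gravity feedforward. A minimal sketch (all symbols, gains, and the mass are assumed): compute the gravity component along the axis of motion from the measured inclination and add it outside the feedback terms, so the PD only corrects the residual error.

```python
# Gravity feedforward plus PD feedback; every parameter is assumed.
import math

def control_force(error, d_error, incline_rad, mass=2.0, g=9.81,
                  kp=50.0, kd=12.0):
    feedforward = mass * g * math.sin(incline_rad)  # "neutral buoyancy" term
    feedback = kp * error + kd * d_error            # PD handles the rest
    return feedforward + feedback

# Horizontal axis: no gravity component, output is pure feedback.
print(control_force(0.0, 0.0, 0.0))             # 0.0
# Vertical axis: feedforward alone holds the load at zero error.
print(control_force(0.0, 0.0, math.pi / 2))     # ~19.62
```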
I really start to think I have to build some actuator system and play with this.
I once had a good friend who was seriously into control systems engineering, he designed the active suspension control system for the Williams F1 cars when they were winning all the time. He was always on about "poles" and "zeros" whenever we got together for a beer. Sadly I never understood what he was talking about!
To compensate for integrator windup, my simplistic approach has been to disable changes to the integral term when the temperature is far from the set point. There are probably better ways to do it, however.
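The conditional-integration idea described above can be sketched like this (the class name, gains, and band width are my assumptions): the integral is simply frozen while the error is outside a band around the set point, so it cannot wind up during a long approach.

```python
# PI controller with conditional integration as a simple anti-windup.
# All parameter values are assumed for illustration.

class PIWithConditionalIntegration:
    def __init__(self, kp, ki, band, dt):
        self.kp, self.ki, self.band, self.dt = kp, ki, band, dt
        self.integral = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        if abs(error) <= self.band:      # integrate only near the set point
            self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

pi = PIWithConditionalIntegration(kp=0.1, ki=0.01, band=5.0, dt=1.0)
out_far = pi.update(100.0, 20.0)    # error 80: integral stays frozen at 0
out_near = pi.update(100.0, 97.0)   # error 3: integral starts accumulating
```

Another common approach is back-calculation (reducing the integral in proportion to how much the output is clamped), which avoids choosing a band.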
As far as I know, most house heating systems are controlled manually, and the ROI of automatic heating control is very poor ;-)