# Effect of loop timing variations

The loop iteration time is a topic that comes up frequently on this forum, mostly from the perspective that ‘more iterations means better control’, sometimes even from the opposite perspective. What I wonder about, however, is the effect of the *stability* of the loop timing on the control algorithm. I don’t think I’ve seen much discussion about this, but I do see that things causing irregular iteration times (such as calling motor.move() only once per so many iterations, or running the monitoring once per so many iterations, which has a huge impact) are very common. The library also implements PID loops in several places, and as far as my understanding goes PID loops are time-sensitive things (when I or D are used): when an iteration runs 10% faster, the effect of the I term will be ~10% larger while the effect of the D term would be ~10% smaller. Aren’t we sabotaging our control algorithms by allowing such variations in loop time?

Yes. That’s why you need as fast an MCU as \$\$\$ allow.

I would not call it sabotage, it’s a reasonable trade-off.

Wouldn’t the effect potentially be even worse with a faster MCU? The difference between a fast iteration and a slow (I/O-blocked) iteration could be even bigger.

No, because you update the values a lot faster. The key here is to update the values faster than the motor spins. Beyond a certain RPM the update rate doesn’t matter anymore; you are always “ahead of the curve”, so anything more is just wasted computing cycles.

PS By “updating the values faster than the motor spins” I mean updating them faster than d^2a/dt^2 (the second derivative). There is a dt cutoff beyond which the updates to the electromagnetic field cannot catch up with the changes in the physical system: the algorithm thinks the motor is at angle a but the motor is actually at angle a+da, where da is so large that you start experiencing what people colloquially call “cogging”, “detents”, “grinding”, etc.

I’m lying to you a little, but that’s to simplify the problem and explain it better. There are beautiful YouTube videos that do a much better job of visualizing it than my crude explanation.

The answer to your questions just comes down to having to re-tune the gains of the controller. A different sample time will of course result in different gain values.

I am always happy to hear about control theory questions, thanks @daniel for starting the discussion.

To answer your question we need to separate two timing influences on the FOC control:

• influence on setting the field vector `motor.loopFOC()`
• influence on the motion control `motor.move()`

For the field vector setting process it is far more important to have the calls as fast as possible than to have a certain periodicity, and whenever we do have a longer time between two calls we will see a drop in performance, specifically in the precision of the field vector setting.

Motion control, as you’ve already introduced, is time-sensitive. Powered by control theory, we try to analyse and prove the stability (and maybe even guarantee a certain performance) in the continuous time domain. When we discretise the system, we usually use a fixed sample time, mostly because it is easier to analyse and the drop in performance is potentially easier to compensate for. If we were to use a fixed sample time for the discretisation of the low-pass filters and the PIDs, we would see a huge influence in the cases when the loop time grows.

SimpleFOClibrary’s approach is actually based on the fact that every MCU is different, and I have written it in a way that avoids fixing the sample time. The PIDs and LPFs have an adaptive sample time, which means that they measure the time since the last call and, on each call, discretise the differential equation with respect to it. This produces PIDs and LPFs that can work for a wide range of sample times.
If you look into the library code, you will see that we do not have any fixed sample time value; this is very important because we know that fixing it would not be possible for most people. However, if the motion control and the FOC algorithm were run in a timer callback with a fixed frequency (which would fix the sample time), this adaptive method would still work fine; it would just contain a few lines of code that are not strictly necessary.

Here is a short doc on the PID thoery and implementation: PID controller theory | Arduino-FOC
And for the LPF: Low Pass Filter | Arduino-FOC

So, a quick summary:

• SimpleFOClibrary’s motion control (`motor.move()`) uses an adaptive time-sampling strategy and can cope with different loop times by rediscretising the controllers and filters in each loop. For that reason, downsampling the motion control, and sometimes longer loop times (as long as they are not too long), do not require you to change the parameters (because it’s adaptive).
• However, longer loop times strongly influence the field orientation vector setting code `motor.loopFOC()`, and this is probably the part of the code producing the biggest performance loss.
• Finally, as we are not dealing with ideal control theory problems but with real hardware, we have many different influences that complement each other. For example, longer sample times will, for encoders and Hall sensors, allow more impulses to be received and in that way produce a better velocity estimation.

I can’t possibly add anything to the control theory side of these questions, but I think I can add something about the “plumbing” side of things:

• you set a PWM frequency in your driver configuration. Generally PWM frequencies should be >25kHz to reduce motor noise and provide “smooth” signals, but there is a dependency on the driver stage (FET switching times) and other factors which might force you to choose a lower PWM frequency.

• the PWM frequency determines the maximum speed with which you can send changes in the waveform to the motor. 25kHz PWM means you can’t change the value more than 25k times per second, i.e. at most once every 40µs.

• But changing the PWM level every cycle will not lead to very smooth behaviour, intuitively, since you need a few PWM cycles at a given “level” for the signal’s recipient to receive and “smooth” them so that it sees that level as an analog equivalent. An Infineon app note I read somewhere put that number at 10: you should have 10 PWM cycles at a given level before switching to the next.

• So then we’d be at 400µs…

• So by that logic the maximum loopFOC iteration rate that would make sense for a 25kHz PWM frequency would be 2500Hz, if you believe the 10-cycles thing.

• You can then relate this back to motor speed: e.g. at 100rad/s the motor moves 0.04rad in 400µs, which is already getting to the point where sensitive sensors could measure it.

• You can relate that back to the electrical angle, i.e. at 7PP the motor moves ~0.9rad of mechanical angle per electrical revolution.

• When the angle moved per iteration period becomes significant compared to the electrical revolution, the system will become inefficient because you’re not updating fast enough, and you’ll stop being able to go faster.

Does that make any sense?