How do you improve slow speed control?

I would like you to study this video, which compares three BLDC motor drives. At low speed, they all show a jerky motor rotation. Why is that?

I guess that they all use a magnetic angle sensor to measure the angle of the rotor. I furthermore guess that this angle sensor is reasonably accurate, so it should not in itself cause the jerky movement.

At 7:09 in the video, it is explained that the ODrive performs an initial calibration, which includes a slow movement. But that movement is not jerky.

Have you tried something similar with the FOC software and some controllers? Can you make the motor go slowly without the jerky movement you see in the video?

I am interested in the problems involved in slow-speed control of electric motors, and how it can be improved.

That kind of steppy motion is natural for velocity control on a motor with significant cogging torque. You may be able to reduce it somewhat by increasing the PID gains so the velocity controller responds more quickly to the variation caused by cogging, but it may just become unstable instead.
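For reference, these are the relevant tuning handles in SimpleFOC; the values here are only illustrative starting points, not recommendations:

```cpp
// SimpleFOC velocity-PID tuning handles (assumes a configured BLDCMotor
// named `motor`); raise P and I gradually and back off if it oscillates
motor.PID_velocity.P = 0.5f;
motor.PID_velocity.I = 20.0f;
motor.PID_velocity.D = 0.0f;              // D amplifies sensor noise, use sparingly
motor.PID_velocity.output_ramp = 1000.0f; // limits the output slew rate
motor.LPF_velocity.Tf = 0.005f;           // velocity lowpass time constant
```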

What I would do is use the angle_nocascade mode. The cascaded style performs better with inertial loads, but if you have another layer of control on top, angle_nocascade can give smoother and more precise positioning. For CNC machines, for example, the controller generates acceleration ramps, so inertia is already accounted for.

If you need smooth velocity control at low speed, increment the target angle similarly to how velocity_openloop does it (see the sketch after this list). Two caveats:

  1. Stop incrementing the target if it gets some distance beyond the measured angle, to prevent windup if the motor is stalled.
  2. Add a way to wrap the angle back when it gets large due to the limited numerical accuracy of floating point.
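Here is a minimal sketch of the idea, assuming a SimpleFOC motor already configured in angle mode; `target_velocity` and `max_lead` are names I made up for illustration:

```cpp
// Increment the target angle each loop like velocity_openloop does, but clamp
// the lead over the measured angle (caveat 1). Angle wrapping (caveat 2) is
// only hinted at here, since it must also account for the sensor's rotation count.
float target_angle = 0.0f;
const float target_velocity = 0.5f; // rad/s, desired slow speed
const float max_lead = 0.2f;        // rad, max allowed lead over the rotor

void loop() {
  static unsigned long last_us = micros();
  unsigned long now_us = micros();
  float dt = (now_us - last_us) * 1e-6f;
  last_us = now_us;

  // caveat 1: stop incrementing if the motor can't follow (stalled)
  if (target_angle - motor.shaft_angle < max_lead)
    target_angle += target_velocity * dt;

  motor.loopFOC();
  motor.move(target_angle);
}
```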

I would like to know what the ODrive is doing to get smooth calibration motion. Thankfully the v3.6 is open source, so I may go digging to find out…

Thanks for your reply.

I think some of the comments on the YouTube video are interesting as well. @boobelan suggests that the ODrive uses a higher current in open-loop drive during calibration, so the cogging torque becomes relatively smaller.

I agree that higher gains in the position and velocity control loops should decrease this jerking caused by cogging. One way to allow higher gains is to increase the control-loop sampling and update frequency.

I have a suggestion too. I guess you use the encoder to output an angle for your controller, but there is a bit of noise on this angle. You also need a speed signal for the controller, which is calculated by subtracting the angle measured a short time before (depending on the control-loop frequency). When you differentiate the angle signal like this, the speed signal will have significant noise, and the noise increases with a higher control-loop update frequency. When the encoder's own measurement cycle is unsynchronized with the motor control loop, you get errors and noise from that too.
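To put a number on it: if the angle samples carry independent noise with standard deviation $\sigma_\theta$, then the differenced speed estimate has noise that grows in proportion to the update frequency:

$$\hat{\omega}_k = \frac{\theta_k - \theta_{k-1}}{\Delta t}, \qquad \sigma_{\hat{\omega}} \approx \frac{\sqrt{2}\,\sigma_{\theta}}{\Delta t}$$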

Update frequency limits your top speed, but sensor noise (encoder and current sensors) is usually the limiting factor for how fast the PIDs can respond to changes.

The Sensor class has a variable min_elapsed_time to limit how often the velocity gets recalculated (default 1 millisecond), to reduce the effect of angle noise. The velocity then gets lowpass filtered by the motor class, which further reduces noise but also reduces how quickly the PID can respond to changes in speed; the default time constant is 5 milliseconds. The current sense is also lowpass filtered, with a default of 5 ms, which I think means there is on the order of 10 ms of delay between measuring a speed change and applying torque in response. There’s an optional lowpass for the angle too, but it’s disabled by default.

The defaults are chosen to give decent results for beginners on a variety of hardware, not for maximum performance. With good hardware, you can use lower filter constants.
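For example, something like this (illustrative values, using the fields mentioned above; `sensor` and `motor` are the configured SimpleFOC objects):

```cpp
// Tightening the default filters for better low-speed response on good
// hardware; the values are illustrative, not recommendations
sensor.min_elapsed_time = 0.0005f;  // recalculate velocity at most every 0.5 ms
motor.LPF_velocity.Tf   = 0.002f;   // velocity lowpass, default 0.005 s
motor.LPF_current_q.Tf  = 0.002f;   // q-axis current lowpass, default 0.005 s
motor.LPF_current_d.Tf  = 0.002f;   // d-axis current lowpass, default 0.005 s
// motor.LPF_angle.Tf   = 0.0f;     // optional angle lowpass, disabled by default
```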

Antun did some testing of the INA240 versus the ACS712 a while back, and got much better results with the INA240. Presumably that was taking one sample per update, so it should be even better with oversampling, maybe good enough to eliminate the need for lowpass filtering the current entirely. Of course the shunts would need a much lower resistance for high-current motors, but the precision should stay the same in proportion to the overall range.

Another thing that’s been on my agenda for a long time is to combine voltage-based current control with hardware current sense. I think that would be considered feed-forward: predict the necessary voltage so the loop responds faster, but use the measurement and PID to correct the error so it doesn’t explode if the prediction is wrong.
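A rough sketch of what I have in mind; the motor constants and the function are made-up illustrations, and only the PIDController class is existing SimpleFOC API:

```cpp
#include <SimpleFOC.h>

// assumed-known motor constants (illustrative values)
float phase_resistance = 0.5f;  // ohm
float ke = 0.01f;               // back-EMF constant, V per rad/s

// SimpleFOC PID: P, I, D, output ramp, output limit
PIDController pid_current(1.0f, 200.0f, 0.0f, 1000.0f, 12.0f);

// predict the voltage from the motor model (feed-forward), then let the
// PID on the measured current correct the model error
float currentToVoltage(float i_target, float i_measured, float velocity) {
  float u_ff = phase_resistance * i_target + ke * velocity;
  float u_fb = pid_current(i_target - i_measured);
  return u_ff + u_fb;
}
```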

Perhaps you already know that I do not have practical experience with the SimpleFOC software or BLDC drives. But I do have a fair amount of experience with power electronics and motor drives in general. I hope my comments here are not too irrelevant for you.

I am surprised to learn that the measured motor current is filtered in software with a time constant of 5 ms before it is used in the current control loop. That seems odd to me. Are you sure this is correct?

If you do that, then of course you cannot have a responsive current control loop. Why should it be necessary to filter the current signal? Is the sampling of the current signal not synchronized with the switching of the power transistors? If not, I can believe that the power-switching transients disturb the current measurements. I commented on this issue in another thread regarding a driver for stepper motors:

https://community.simplefoc.com/t/low-side-current-sensing-for-stepper-motors/7235/39

I can imagine the need to filter the speed signal with a time constant of 5 ms, but again I think it will cause severe problems for a fast speed-control loop. I know that a small brushed DC motor used as a tachogenerator can provide a speed signal with much less noise. I have used one for sewing machines, with two amplifications of the analog value to provide a wider dynamic speed range. But a brushed DC motor doesn’t provide the shaft angle needed for FOC.

I have been considering whether a small BLDC motor could be used to provide angle and speed signals, again with two amplifications to increase the dynamic range. I have looked at the EMF signals from such a motor with one terminal held at a fixed center voltage, sampling the signals from the two other terminals. This is the test setup:

This is the result:

The BLDC generator has 12 poles and Kv = 4300. It is rotated at a speed of 1830 rpm, which gives an electrical frequency of 183 Hz.

Provided that you have two sine waves 60 degrees apart, you can calculate the peak value of the EMF sine wave to be:

EMF peak = 2 * sqrt((v1^2 + v2^2 - v1*v2) / 3)

Gray curve = sqrt(v1^2 + v2^2 - v1*v2)
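For reference, this works because with ideal sine waves $v_1 = A\sin\theta$ and $v_2 = A\sin(\theta - 60°)$, the expression under the square root is constant:

$$v_1^2 + v_2^2 - v_1 v_2 = \frac{3}{4}A^2 \quad\Rightarrow\quad 2\sqrt{\frac{v_1^2 + v_2^2 - v_1 v_2}{3}} = A$$

so for perfect sine waves the gray curve would be completely flat.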

The gray curve has some ripple, which I think is mainly caused by the measured voltages from the BLDC generator deviating from pure sine waves. But it is a fair signal, and with a relatively high ripple frequency.

Using this motor, I find it possible to get the noise in the speed signal below 0.05 rpm, and you should be able to track a reasonable shaft angle even at speeds below 0.5 rpm. There is a risk of losing track of the angle at very low speeds, but I guess that could be countered by sending a small ping signal to the main motor, causing a small movement now and then to keep track of the angle. Otherwise, you will need some recalibration. You also need two consecutive measurements to get information about the direction of rotation.

Specifically, it is the d and q currents that are filtered, not the raw sensor values.

So the transformation by rotor angle is done before filtering, meaning the filtered value should be able to follow the rotation without lag, but changes in amplitude will be slowed down, limiting how fast the velocity PID can get a response out of it. At least I think that will be the effect.

Sampling is typically synchronized with the transistor switching using the LowsideCurrentSense class. It works for inline sensors too, and is better optimized on STM32, which has a very slow implementation of analogRead used by the InlineCurrentSense class. But the inline class works on more platforms, because low-side sensing typically requires a specialized ADC setup for each one, and some hardware can’t do it at all.
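A typical setup looks something like this (shunt value, gain and pins are board-specific examples):

```cpp
// Low-side current sensing, synchronized with the PWM via the linked driver
LowsideCurrentSense current_sense = LowsideCurrentSense(0.003f, 50.0f, A0, A1, A2);

void setup() {
  // ... driver and motor init ...
  current_sense.linkDriver(&driver);  // needed so sampling can sync to the PWM
  current_sense.init();
  motor.linkCurrentSense(&current_sense);
}
```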

That said, it would be nice to make some specialized implementations of InlineCurrentSense too, which do oversampling to improve accuracy, since hall-effect current sensors are usually quite noisy. In that case the sampling is not synchronized with the transistor switching, but hopefully any spikes from switching get drowned out in the mix. It would also be possible to sample into a buffer and filter out spikes before averaging, although that would take more CPU time than using STM32’s hardware oversampling.
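A crude sketch of that buffer idea (my own illustration, not existing SimpleFOC code): drop the extreme samples as spike rejection, then average the rest:

```cpp
// Oversample one ADC channel, discard the min and max samples as likely
// switching spikes, and average the remainder; n should be at least 4
float oversampleRejectSpikes(int pin, int n) {
  float sum = 0, vmin = 1e9f, vmax = -1e9f;
  for (int i = 0; i < n; i++) {
    float v = (float)analogRead(pin);
    sum += v;
    if (v < vmin) vmin = v;
    if (v > vmax) vmax = v;
  }
  return (sum - vmin - vmax) / (float)(n - 2);
}
```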

That is an interesting idea, using a DC motor as a speed sensor. If you need really precise velocity control, you could use it in addition to a position sensor. Or perhaps with the flux observer, so all you need is current sensors and the DC motor velocity sensor.

On your BLDC signal image, the gray line has 6 ripples per cycle of the sine waves, so I think those are the cogging steps. The winding calculator gives 36 steps for 12 poles, so 6 per pole pair.

Benjamin Vedder’s VESC firmware uses a technique called HFI (high-frequency injection) to keep track of the angle at low speed by sending a ping signal, but it hasn’t been implemented in SimpleFOC yet.

No, I do not find that cogging torque causes the ripple in the gray curve. The test setup includes a much larger brushed DC motor, which has a lot of cogging torque too. It has five rotor poles, so its torque ripple has a frequency of 10 times the rotation frequency, and it will dominate the small cogging torque of the BLDC generator. When I feel the cogging of this BLDC generator, it is not that consistent, but it mostly has 12 preferred positions per rotation. The ripple you see in this speed signal is at 36 times the rotation frequency. The raw electrical signal has a frequency of six times the rotation frequency, due to the 12 magnetic poles. In this test the motor runs quite fast, which makes the relative speed variations small because of the inertia of the DC motor.

When you look at the raw signal, it is not a sine wave. It has a flattened top, and the slope of the curve near zero voltage is a bit too low. You may be able to compensate for this ripple, because it is likely to be consistent for the same BLDC generator. That will, however, need further tests to be confirmed.

Since I am not that much into this FOC concept, I do not understand what this filtered current is used for. If you have a 14-pole BLDC motor running at 6000 rpm, the fundamental frequency of the voltage and current to the motor will be 700 Hz. A filter time constant of 5 ms rolls off this signal from about 32 Hz.
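That figure is just the corner frequency of a first-order lowpass:

$$f_c = \frac{1}{2\pi\tau} = \frac{1}{2\pi \cdot 5\,\mathrm{ms}} \approx 32\ \mathrm{Hz}$$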

I think we agree that cogging torque will cause significant relative speed variations at low speed, and that you need to be able to change the motor currents quickly to reduce the variations. It is the same for DC motors with cogging torque, and the problem might be a bit easier to understand with them. I have a measurement of that, with a speed controller that struggles to keep a constant speed. This is an example at a speed of 120 rpm, with a 2 kHz update frequency and 4 ms between the samples in the curves:

You don’t want the speed (tacho) signal to be noisy when you want to use the D term of the speed PID controller.

This BLDC generator provides a better speed signal, with a higher ripple frequency, than the DC motor generator I used. But I have not yet developed the software to handle negative speeds and the zero crossing of speed for such a BLDC generator.

This generator (or motor) has 12 poles and 9 inner stator teeth. I have noticed that a slightly larger motor has 14 poles and 12 inner stator teeth. Perhaps that kind of configuration creates a nicer sine-wave EMF curve than the one I made the measurements on.

Another BLDC motor can be used as a position sensor, but it has some disadvantages:

The main one would be that the BEMF signal becomes insignificant at very slow speeds, so for your initial purpose of running motors very slowly but smoothly it isn’t very suitable.

The other problem is that you’re effectively using your ADC to measure it, so depending on the MCU you have 10-14 bits, typically 12 bits. But you have to scale the output from the BLDC so that your top-speed BEMF is measurable, as both voltage and frequency increase with speed. So again, at the low end of the speed range the signal gets noisier and noisier…

Compared to this, normal encoders or magnetic sensors as they are usually used can have higher resolution (some have 16 or even >20 bits) and don’t suffer from weak signals at low speeds…

Yes, I agree that using an EMF generator has disadvantages, but it is likely the only way to get a speed signal with fast response and low noise at low speed. And yes, you will need four analog inputs with different amplifications, or to switch the amplification ahead of the AD converter depending on speed. I estimate that you will be able to keep track of the position down to very low speeds of about 0.05 rpm. But yes, there is a limit there.

Higher resolution from hall-element sensors does not automatically mean lower noise. I agree that the better existing encoders may perform better than what you see in the video above.

I hope someone with a good encoder will try the same test as shown in the video above, with a motor that has cogging torque.

The problem with the usual unsynchronized XY hall-element sensors with built-in AD converters can be solved by using an XY hall-element sensor without an AD converter, and making use of the AD converters in the MCU instead (see the sketch below).
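Something like this is what I have in mind (hypothetical pins, 12-bit ADC assumed):

```cpp
// Read raw (analog) X/Y hall-element outputs with the MCU's own ADC,
// synchronized with the control loop, and compute the rotor angle directly;
// the pins and mid-scale centering are hypothetical
float readAngleFromXYHall() {
  float x = (float)analogRead(A0) - 2048.0f; // X channel, centered at mid-scale
  float y = (float)analogRead(A1) - 2048.0f; // Y channel, centered at mid-scale
  return atan2f(y, x);                       // angle in radians, -pi..pi
}
```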

The problem is that this sensing scheme has less resolution the slower you go, unless you were to somehow dynamically adjust the gain of your amplifier stage.

So while the motor is turning fast, you’ll use the full range of the ADC; for argument’s sake, let’s say 12 bits of resolution, corresponding to 2^12 voltage levels and an angular resolution of about 0.09°.

But as your signal drops because the motor is turning slowly, you’re left with only a fraction of the ADC resolution. If your signal is now 100 mV peak-to-peak, compared to 3.3 V before, it spans only about 124 ADC counts, and your angular resolution has dropped to about 2.9°.
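The arithmetic behind those numbers, using the same rough 360°/counts approximation:

```cpp
// Rough effective angular resolution of a BEMF-style sensor as the signal
// amplitude shrinks relative to the ADC range (approximation only)
float angularResolutionDeg(float signal_vpp, float adc_range_v, int adc_bits) {
  float counts = (float)(1 << adc_bits) * (signal_vpp / adc_range_v);
  return 360.0f / counts;
}
// angularResolutionDeg(3.3f, 3.3f, 12) -> ~0.09 deg
// angularResolutionDeg(0.1f, 3.3f, 12) -> ~2.9 deg
```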

There are sensor types that aren’t subject to these disadvantages…

Specifically, magnetic position sensors keep their full accuracy as you turn slowly; they tend to lose accuracy as they turn faster, due to bandwidth limitations.

Capacitive encoders would similarly work well at slow speeds, as their excitation signal strength doesn’t depend on the speed.

These solutions come pre-packaged in a smaller size than you could easily achieve with a motor winding, and would probably come out cheaper as well by the time you’ve put the “sensor motor” and its supporting amplifier together.

But of course as an experiment in making your own sensor it would still be cool to do :slight_smile:

STM32G431 has variable-gain opamps, but it’s tricky dealing with the bias voltage when measuring signals that can go negative, since the bias gets scaled along with the signal. I haven’t tried this, but I think you could supply the bias by connecting the DAC output to the OPAMP_VINM pin. E.g. if your sense motor outputs ±5 V at full speed, use a resistor network to scale that to 0.3-3 V (some safety margin against frying the pins) and input it to OPAMP_VINP. Then, if using 4x opamp gain, the signal range will be equivalent to 1.2-12 V with the center at 6.6 V. To get back centered at 1.65 V where the ADC can see the signal, you need to subtract 4.95 V, so set the DAC to output 4.95 V / 4 = 1.2375 V.

Reading about it in reference manual RM0440, it looks like you can internally route DAC3_CH1 to OPAMP1_VINP and DAC3_CH2 to OPAMP3_VINP, so in that case maybe the signal should connect to VINM instead. The math would be a bit trickier, but it would simplify routing on the PCB.

Yeah, but you also have to keep within the common-mode input range of the opamp, which for the G4 is only 0-VDDA…

I think I’m not really getting it.

Adding an offset to VinM which isn’t present on VinP would just kind of have the effect of raising the rail of the opamp? Since it can’t go negative in this case?

I’ve never quite understood the meaning of the term “common mode”, but I think this scheme should be safe as long as the sense motor isn’t spun significantly faster than the maximum design speed. Both the input and the DAC output are within the 0-3.3 V range, so only the amplified sum can go outside it, which should just saturate.

The goal is to use the DAC to offset the amplified signal down so the interesting portion of the large theoretical voltage range is within the 0-3.3V window that the ADC can see.

The common-mode range is the range of voltages that all inputs need to stay within. I understand now that if you set your resistor divider according to the maximum expected voltage, then you keep within the common-mode input range.

Changing the gain will scale the zero offset along with the rest of the signal. But I think that to increase the resolution you need to scale the VREF from the DAC, not the VinM…

The VinM would just shift the bottom rail relative to the signal you’re trying to measure, which is itself relative to VSSA… So, for example, if you set VinM equal to the zero offset (relative to VSSA), you would just end up not being able to measure the negative currents, and have the same resolution (determined by VREF relative to VSSA) as with the lower gain…

Let me draw a picture…


So, to get the input signal within the 0.3-3 V range, we use a resistor network to scale it and bias it to 1.65 V. But if the signal is weak, we need to amplify it, and that 1.65 V bias gets amplified along with the signal, up to 6.6 V in this example. So our weak signal is varying up and down around 6.6 V, totally saturated from the ADC’s point of view.

So we need to somehow subtract 4.95 V from this amplified signal to get it back within the ADC’s range. And since the opamp VINM gets scaled as well, we can use 4.95 V / gain = 1.24 V from the DAC to do it.

Dynamically changing VREF would give a similar scaling effect to the opamp, but then there’s no practical way to correct the scaled bias.

Hmmm… so you’d adjust the lower rail dynamically to recenter the input while also adjusting the gain. I guess it can work in theory. I think I like adjusting just the VREF better, but I didn’t look into the limits on that, so there could be good reasons not to do that either.

Either way I can’t say that it sounds easy. Better to devise a sensor scheme that doesn’t have these issues…

When I made the measurements, I made use of a “center voltage supply”. It provides approximately 1.65 V with a 3.3 V supply voltage, or 2.5 V with a 5 V supply voltage. This voltage is connected to one of the three terminals of the BLDC generator.

In order to remove the offsets of the amplifiers, you need to measure all four inputs with the motor standing completely still. These are then the zero values to subtract when you use the signals from the AD converter. I hope this makes sense.
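As a sketch (hypothetical pins), the calibration could look like this:

```cpp
// Average each of the four amplified inputs at standstill to get its zero
// level, then subtract these from later readings; the pins are hypothetical
const int sense_pins[4] = {A0, A1, A2, A3};
float zero_offset[4];

void calibrateOffsets(int samples) {
  for (int ch = 0; ch < 4; ch++) {
    long sum = 0;
    for (int i = 0; i < samples; i++) sum += analogRead(sense_pins[ch]);
    zero_offset[ch] = (float)sum / (float)samples;
  }
}
```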

You cannot reduce the measurement bandwidth at slow speeds, because then you cannot get the fast response you need in the control loop.