Anyone interested in collaborating on anti-cogging? (Lowest-hanging fruit for smoother motion)

Glad you liked the revelation regarding permanent magnets and the lack thereof in steppers :). A TMC2208 board is like 5 bucks on AliExpress, just so you know. They give relatively smooth motion because they basically use 256 microsteps; that's why. The actual amount of motion distortion should be extremely small even with 32 microsteps, unless this is some kind of ultra-precision milling machine. It's cool to hear about though, because I used to be a CNC technician working with mills a lot. They usually had brushed or brushless servos with e.g. 512 pulse-per-rotation encoders, if they used an encoder at all. A stepper, even without an encoder, usually gives 200 steps per revolution without microstepping; with something like 8 microsteps you are already getting higher resolution than any encoder I have seen.

Silent stepper motors really are about audible noise, not smooth motion; applications which require more resolution than a normal stepper driver can give are relatively rare.

The main advantage of encoders over open-loop steppers in my work was the lack of concern about skipped steps. We had ShopBots, for instance, that would hit screws during routing, get forced around the screw, and carry on, completing the job fine except for that one little defect. A stepper-based machine without encoders would have lost steps and messed up the whole job.

My first successful result with calibration! Now I'm afraid to disconnect the microcontroller from the power supply )))) It would be great if there were a way to save the calibration data…


Of course, there should be a way to save calibration data :slight_smile:

But unfortunately, how to do it is a very complicated, MCU- and hardware-specific question.

One option is via the PC, and this can be simple to implement: dump the data to the serial console, then on the PC convert it into a C array which you compile back into your program.
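Just to sketch that PC round-trip (names made up, not SimpleFOC API): the MCU can print the table directly in C-array syntax, so the serial output pastes straight back into the firmware source. Something like:

```cpp
#include <cstdio>
#include <string>

// Hypothetical sketch: format a calibration LUT as C source text.
// On a real MCU you would Serial.print() each line instead of
// building a std::string; the string version is just easier to show.
std::string lut_to_c_array(const float* lut, int n, const char* name) {
    std::string out = "const float " + std::string(name) + "["
                    + std::to_string(n) + "] = {\n";
    char buf[32];
    for (int i = 0; i < n; ++i) {
        // 4 values per line, newline after the last entry
        std::snprintf(buf, sizeof(buf), "  %.6ff,%s", lut[i],
                      (i % 4 == 3 || i == n - 1) ? "\n" : "");
        out += buf;
    }
    out += "};\n";
    return out;
}
```

The generated text is valid C/C++, so after a calibration run you copy the console output into a header and rebuild.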

To do it on the MCU, you would need a non-volatile memory.
On the ESP32 we could use the SPIFFS filesystem to save and load a file with the calibration data on a flash partition.

On other MCUs it is not so simple, and there would probably have to be an EEPROM / flash chip on the driver board to store it. This topic is already quite user-specific, and not close to motor control or the SimpleFOC hardware, so I am not sure that we will prioritise implementing it… :frowning:


Hey, and your setup looks really nice, the motor is turning very well I think!


I like the way it spins now. I managed it using adcread, which is implemented in the library for ESP32. It works great, but there is noise; I guess we need to limit the number of reads per second. I'll try.

lol, this is like 5 lines of code in MicroPython ;). I'm making progress with the waveform generator. I stole an idea from Trinamic: they use a system which ramps the angle based on the time since the last angle update. It helps a lot to smooth the angle out and yet still gives precise motion. They use it for stepper motors, but it would work here too.
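For what it's worth, here's a tiny sketch of how I imagine that time-based ramping working (my own naming, not Trinamic's actual implementation): each new target angle is reached by ramping linearly over one update interval, so the commanded angle stays smooth between the discrete updates.

```cpp
// Sketch of time-based angle ramping between discrete angle commands.
// All names are made up for illustration.
struct AngleRamper {
    float prev_target = 0.0f, target = 0.0f;
    unsigned long t_update = 0, interval = 0;  // timestamps in microseconds

    // Call whenever a new target angle arrives.
    void setTarget(float angle, unsigned long now) {
        if (t_update != 0) interval = now - t_update;  // measure update spacing
        prev_target = target;
        target = angle;
        t_update = now;
    }

    // Call every control-loop tick: linearly ramp from the previous target
    // to the new one over one update interval, then hold the target.
    float angleAt(unsigned long now) const {
        if (interval == 0) return target;
        float f = float(now - t_update) / float(interval);
        if (f > 1.0f) f = 1.0f;
        return prev_target + (target - prev_target) * f;
    }
};
```

The control loop would call `angleAt()` at its own (faster) rate, so the commanded angle advances in many small steps instead of jumping at each update.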

Well, no, I don't think MicroPython can save anything if there is no writable memory to store to… It can't do magic, and being able to store calibration data on the MCU still depends on there being non-volatile memory available for it.
But MicroPython can make things a lot easier where people have prepared libraries and hardware integrations for it.
And many people find working in Python easier than in C. In addition, the way to program MCUs when using the MicroPython-enabled boards is often very easy and intuitive. All these attributes make MicroPython a great choice for many users and applications.

But the disadvantage is that it is 100x slower than C/C++ code, and so complex real-time things like motor control don’t work well in it. Here you’re dependent on being able to plug in a hardware-specific library to do the fast tasks for you, and that library will most likely be written in C/C++… :wink:

Nah, it's teh magicz! Jk, jk. Yeah, ok, fair enough. And yes, its slowness is of course why I am here. However, there is a video by Damien whatever where he shows that with a couple of methods it's only about 1/33 the speed of C ;-). There are supposedly ways of making modules in C that you can use in Python, but I'm betting it gets complicated. Anyway, off topic I guess, sorry.

I would like to experiment with anti-cogging in the future.
I understand that saving the calibration is challenging because of the amount of data.
But the cogging map is not random, right? Is there no pattern that could be used?

If the calibration data is saved, doesn't that mean the motor's rotor must be set to the position from which the calibration was started before loading the data? Otherwise, after restarting, the rotor may be in a random position. I mean encoders in ABZ mode here. I used the ESP32Encoder library, but in the examples I didn't see how to hook up the Z (index) search mode…

I can imagine that it’s related to a combination of the bearings, windings, and each of the unique magnets used for pole pairs, so the variation per motor is probably pretty significant. If you look at the cogging unwrapping from husky it looks like it has some asymmetry, although you might be able to just use a symmetric pattern wrapped over each electrical revolution and get 90% of the way there.

Yes, you have to know the absolute position of the rotor.

Yes, ABZ encoders need an index search to find the encoder zero position, and then it is possible.

It’s not random, but it is unique to each motor, at least to some degree. It will depend on the physical properties of the motor, but also the setup (sensor used, magnet used when working with magnetic sensors, and perhaps other factors).

I think the calibration code we have in the drivers repository would be an excellent starting point for such work…

Having looked through the calibration code, it's great for what it does, but I don't think it would help much with anti-cogging during actual rotation at significant speed. Cogging at lower speeds for precise angles is mostly taken care of by the sensor itself and the control loop, I think. It depends a lot on what you want to do. For me, I want smooth continuous torque to keep the noise down. People making smart knobs want something similar but at much lower speeds. People with higher-power motors may wish to reduce vibration to more practical levels, perhaps to reduce noise and wear. People making robots are probably concerned with more intermediate speeds, I would think, which seems to me to be the hardest problem.

The information from the current sensor calibration would not help much with that, except to use it for subsequent data gathering because you have a more accurate sensor.

My understanding is that cogging is mostly about torque, which biases the rotor position relative to the electrical angle, and which causes the acceleration of the rotor to vary depending on its angular position. There are some papers on Google Scholar that discuss some methods, and on Hackaday some people doing lower-cost experiments to compensate for cogging, and there is definitely a ton of promise. However, the measurements need to be taken while the motor is moving at considerable speed, I think, not the very slow motion currently used by the angle calibration.

What the angle calibration code appears to do is command the electrical angle to position X, then give the rotor a short time to assume the equilibrium position. It then assumes the rotor really is in the position that the magnetic field, in the absence of cogging, would lead to, and builds its lookup table. Then at run time it uses the table backwards, taking the angle reading and mapping it back to what the electrical angle would have been. And there is some clever stuff with interpolation between the calibration table points, which is definitely good.

However, I think this might actually be worsening the actual motion of the motor, because during the calibration the motor is not in fact going to the commanded position. You are essentially baking in the cogging, not undoing it. I think if you took a proper encoder and used it to compare the angles of the "calibrated" magnetic sensor to the encoder's readings, you would find that the calibrated sensor is better in some ways but worse in others. However, this is just a hypothesis and there is a good chance I'm totally wrong.

I think a better approach to anti-cogging is to carefully align the sensor and magnet, rotate the motor at significant speed (ideally with a circular disk attached to increase angular momentum), and adjust the voltage waveform until the sensor reads the motion as being as smooth as it can be, measured by the area under the curve or something similar. There may need to be different waveforms for different speed regimes. You are compensating for acceleration ripple, i.e. torque ripple, assuming the mass of the rotor and its geometry/moment of inertia stay the same at all times.
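A possible smoothness score for that tuning loop, as a sketch under my own assumptions: sample the angle at fixed intervals while the motor spins, and score the ripple as the variance of the per-sample angle increments. The waveform tweaks would then try to drive this number down.

```cpp
// Sketch of a ripple metric: given angles sampled at fixed time
// intervals, measure how far each per-sample increment deviates from
// the mean increment. Perfectly smooth rotation scores 0.
float velocity_ripple(const float* angle, int n) {
    if (n < 3) return 0.0f;
    float mean = (angle[n - 1] - angle[0]) / (n - 1);  // mean increment per sample
    float sum = 0.0f;
    for (int i = 1; i < n; ++i) {
        float d = (angle[i] - angle[i - 1]) - mean;    // deviation from mean
        sum += d * d;
    }
    return sum / (n - 1);
}
```

A tuning loop could perturb a waveform parameter, re-measure this score over a few revolutions, and keep the change if the score drops.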

Then you would calibrate the sensor by rotating it many times at considerable speed, carefully timing the intervals between reads, and assuming the rotational speed was smooth and constant, in order to adjust for sensor eccentricity / magnet misalignment. Take lots of samples and average things out to reduce noise. You only have to do this once per sensor/motor assembly.
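As a rough sketch of that averaging step (made-up names, constant-speed assumption): take the difference between the measured angle and an ideal linear ramp, and average it into bins indexed by the measured angle. The table that falls out approximates the sensor's eccentricity error at each angle.

```cpp
#include <cmath>

// Sketch: build a sensor-error table by averaging (measured - ideal)
// residuals into angle bins, assuming the true angle advances
// linearly in time (constant rotation speed).
void build_error_table(const float* measured, const float* ideal, int n,
                       float* err, int* count, int bins, float full_turn) {
    for (int b = 0; b < bins; ++b) { err[b] = 0.0f; count[b] = 0; }
    for (int i = 0; i < n; ++i) {
        float wrapped = std::fmod(measured[i], full_turn);  // fold into one turn
        if (wrapped < 0.0f) wrapped += full_turn;
        int b = int(wrapped / full_turn * bins);
        if (b >= bins) b = bins - 1;                        // guard rounding edge
        err[b] += measured[i] - ideal[i];
        ++count[b];
    }
    for (int b = 0; b < bins; ++b)
        if (count[b] > 0) err[b] /= count[b];               // average per bin
}
```

With many revolutions' worth of samples, random noise averages out and what remains is the repeating once-per-turn eccentricity pattern.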

I don't know how much you really need to do this; I think if the sensor and magnet are reasonably properly built it's probably going to be pretty good. Using a larger diametrically magnetized magnet helps too, I have read. I honestly don't know why these sensors are so touchy; it seems to me that a compass works pretty well and they should compare favorably, but… instead they need the magnetic field to be neither too strong nor too weak, etc.

I agree, the measurement needs to be done while the motor is spinning with some inertia. A while back I was doing some measurements in SinePWM open loop angle mode to see exactly when my hall sensors were changing state versus when trapezoid sectors would change, and the rotor lagged behind the magnetic field by 5-8 electrical degrees when turning at very low speed (the hall change angles were 10-15 degrees different depending on rotation direction).

It makes sense because the tangential magnetic force drops off as the rotor approaches alignment with the stator field. You’d have to give it a pulse of high current to pull it close to perfect registration. But as you say, while the motor is spinning you can measure the acceleration ripple and use some algorithm to progressively adjust the table values up and down to minimize it.

To reduce memory usage in exchange for computation time, the table could probably be 4 to 8 entries per pole pair with interpolation. Presumably it will be of a relatively sinusoidal nature, and when I was playing with sine/cosine lookup tables the other day, interpolation gave comparable error to 20x higher resolution without interpolation. So for a 7 pole-pair motor, a 28-entry table should work about as well as 560 entries without interpolation. For best results it will need some care to get the phase right, so the table points line up with the peaks of the wave.
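A coarse wrapped table like that might be looked up along these lines (sketch, made-up names): the table spans one electrical revolution and is reused for every pole pair, with linear interpolation and wrap-around between entries.

```cpp
#include <cmath>

// Sketch: look up a per-electrical-revolution compensation value from a
// small table, with linear interpolation and wrap-around. The same table
// repeats for every pole pair, since the index is the electrical angle.
float cog_comp(const float* table, int n, float elec_angle, float two_pi) {
    float pos = std::fmod(elec_angle, two_pi) / two_pi * n;  // table position
    if (pos < 0.0f) pos += n;
    int i = int(pos);
    float f = pos - i;                                       // fractional part
    return table[i % n] * (1.0f - f) + table[(i + 1) % n] * f;
}
```

So a 7 pole-pair motor with 4 entries per pole pair needs only the 4-entry electrical-cycle table, not 28 separate entries, if you're willing to assume every electrical cycle looks the same.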

You don’t necessarily need non-volatile memory. You could run the calibration, print the results to serial, and then paste into the program and recompile.

EDIT: On second thought, do we even need a unique table? If the acceleration ripple is nearly sinusoidal, it may be good enough to use the existing sine function to generate the cancellation waveform, and just tweak the scale and phase offset until acceleration ripple is minimized. So the anti-cogging process would be something like `voltage.q += _sin(electrical_angle + anti_cog_phase) * anti_cog_scale;`

Just sharing the initial article for reference. A lot is said there.

Is this what you refer to as existing calibration code?

Very interesting reference!

No, this one:

It does a linearity calibration, but its intended use was to compensate misalignment of the magnet in a magnetic sensor. So the non-linearity compensated here is at the level of the full rotation of the magnet, i.e. a lower frequency than the cogging.

With a more fine-granular LUT the calibration routine may also do something for cogging…

Is this also compensating for the dead-time?
EDIT: adding another link

Sorry I hope it’s not off topic, but I think all those topics are connected and can reduce torque ripple/noise.

Indeed, excellent article! I haven’t read the full text yet, but the images look like the ideas in my previous post are no good. The waveform is rather complicated, not something that can be approximated using a coarse interpolated lookup table or scaled sine wave. And it varies quite a bit for each electrical revolution, so there’s probably no way around consuming a lot of memory.

No, it does not, and thank you for the interesting link!!

Compensating dead-time at the PWM level is quite an advanced topic in my opinion, and will be very difficult and MCU-specific to implement.

The LUT approach cannot work for this, because the PWM is not aligned/synchronized to the physical rotor position in any way. At each revolution the dead-time events can happen at different angles of the rotor.

Actually it's not too complicated to compensate: you just need to add or subtract (depending on the sign of the current) a compensation constant to the duty cycle.
You know you need compensation when your current curves look like this (old screenshot from VESC):

This also impacts the open loop angle.

But that might be another low hanging fruit that deserves another thread.