PID controller performance affected by library version update

About a year ago I got my first FOC project working. The project is a reaction wheel inverted pendulum (hardware: Quanum 4008 gimbal motor; SimpleFOC Shield v2.0.3 driver; STM32 F446RE processor; AMS AS5147 motor sensor; AMS AS5147 pendulum angle sensor). Since then I hadn’t touched the code, and it had been working just fine.

Recently, I’ve started looking at upgrading the project. Since I’m working on a different computer now, I just downloaded the latest SimpleFOC library (2.2.3) and tried to recompile my existing code for my board (STM32 Nucleo).

I found I had to make minor mods to my code to align with API changes between the latest release of the simpleFOC library and the version I’d originally been using (2.2.0). Specifically, I had to add a current_sense.linkDriver(&driver) line in the setup, and also two sensor.update() calls in the main loop (one for the motor sensor, one for the pendulum sensor).

This got my code compiling and working again. However, there seems to be one lurking change resulting from the updated library releases that I haven’t been able to identify. Basically, although my code does work, the PID controller I am using to balance the inverted pendulum now (i.e. with the latest library release) performs far more aggressively than before - it’s as though the controller gains are roughly 150% of their previous values. For clarity, I haven’t actually touched the values of the gains.

To figure out where the change happened, I downgraded my SimpleFOC library back to 2.2.0 (where the controller / pendulum work exactly how I expect), and then I incrementally updated. It seems the change happens between releases 2.2.1 (where I get expected performance) and 2.2.2 (where I get the aggressive controller behavior).

Now I could just re-tune my controller gains with the latest library and carry on, but it would be good to understand why this change in performance is happening. Does anyone have any ideas what changed between the mentioned releases that could result in such a performance change?

In case it helps, I include my code as follows:

#include <SimpleFOC.h>
#include <SparkFun_ADXL345.h>

//static const int noOfEnts = 10;
//int zVals[noOfEnts];
//int zValsNew[noOfEnts];
//float zSmooth;
//float zSum = 0;

float kdmod = 1; //prev 1
float P = 3; //prev 3
float I = 0;
float D = kdmod*1.8;
float pidLim = 2;
float pidRamp = 100;

float PV = kdmod*0.003;

float pendAngle;
float pendSetPoint = 4.62;
float pendAngError;

int onSwitch;

// BLDC motor & driver instance
BLDCMotor motor = BLDCMotor(11);
BLDCDriver3PWM driver = BLDCDriver3PWM(9, 3, 6, 8);
MagneticSensorSPI sensor = MagneticSensorSPI(AS5147_SPI, 10);
MagneticSensorSPI pendulum = MagneticSensorSPI(AS5147_SPI, 7);
InlineCurrentSense current_sense = InlineCurrentSense(0.01, 50.0, A0, A2);

// commander communication instance
Commander command = Commander(Serial);
void doMotor(char* cmd) {
  command.motor(&motor, cmd);
}
void doSetPoint(char* cmd) {
  command.scalar(&pendSetPoint, cmd);
}
void doRotationControl(char* cmd) {
  command.scalar(&PV, cmd);
}

PIDController balance_pid = PIDController{P, I, D, pidRamp, pidLim};

void doPID(char* cmd) {
  command.pid(&balance_pid, cmd);
}

void setup() {

  // use monitoring with serial for motor init
  // monitoring port
  // comment out if not needed
  Serial.begin(115200);
  motor.useMonitoring(Serial);
  //motor.monitor_downsample = 0; // initially disable the real-time monitor

  pinMode(A1, INPUT);

  // sensor config
  sensor.spi_mode = SPI_MODE1; // spi mode - OPTIONAL
  sensor.clock_speed = 5000000; // spi clock frequency - OPTIONAL
  //sensor.min_elapsed_time = 0.0001;

  pendulum.spi_mode = SPI_MODE1; // spi mode - OPTIONAL
  pendulum.clock_speed = 5000000; // spi clock frequency - OPTIONAL

  // initialise the magnetic sensors
  sensor.init();
  pendulum.init();

  // link the motor to the sensor
  motor.linkSensor(&sensor);

  // driver config
  driver.voltage_power_supply = 12;

  // link driver
  driver.init();
  motor.linkDriver(&driver);

  // initialise motor
  motor.init();

  // current sense init and linking
  current_sense.linkDriver(&driver); // needed with the newer library API
  current_sense.init();
  motor.linkCurrentSense(&current_sense);

  motor.voltage_sensor_align = 3;

  // set control loop type to be used
  motor.torque_controller = TorqueControlType::foc_current;
  motor.controller = MotionControlType::torque;

  motor.voltage_limit = 12;

  //motor.current_limit = 0.1;

  // align encoder and start FOC
  motor.initFOC();
  //motor.initFOC(1.69, Direction::CW);

  // subscribe motor to the commander
  command.add('M', doMotor, "motor");
  command.add('S', doSetPoint, "pendSetPoint");
  command.add('C', doPID, "PID gains");
  command.add('Q', doRotationControl, "Rotation control gain");

  // Run user commands to configure the motor (see the docs for the full command list)
  //Serial.println(F("Motor commands sketch | Initial motion control > torque/current : target 0Amps."));
}

void loop() {
  // iterative setting FOC phase voltage
  motor.loopFOC();

  onSwitch = digitalRead(A1);

  // handle switch to enable and disable motor
  switch (onSwitch) {
    case 0:
      if (motor.enabled == 1) motor.disable();
      break;
    case 1:
      if (motor.enabled == 0) motor.enable();
      break;
  }

  // update the sensors (needed with the newer library API)
  sensor.update();
  pendulum.update();

  pendAngle = pendulum.getAngle();
  //Serial.println(pendAngle, 4);

  pendAngError = pendAngle - pendSetPoint;
  float target = balance_pid(pendAngError) + PV * sensor.getVelocity();

  // iterative function setting the outer loop target
  motor.move(target);

  // motor monitoring
  motor.monitor();

  // user communication
  command.run();
}

In a way you answered your own question without knowing it. If a change in the code leads to better loop performance, this will impact the PID parameters. I guess this is inevitable for certain hardware combinations. You need to tune down the gains to match the updated code’s loop performance.

Similarly, this would happen if you switch the hardware or sensor. The PID must match the entire system loop performance.

@Antun_Skuric, @Runger, I guess there should be a warning that with each library upgrade users may expect to re-tune their system? Please let me know if my reasoning is correct.


That’s an interesting thought. I’m not quite sure I’ve got my head around the logic fully yet. I think the bit I’m particularly uncertain about is the notion of “better loop performance”. I can’t quite imagine what this means and therefore how it would lead to a performance change. Here’s my (perhaps incomplete) thought process:

So at present the application is an inverted pendulum. As such, the PID control loop in question has an error signal defined by the angular deviation of the pendulum from the vertical, as measured by a rotary magnetic sensor. The control loop output is a torque demand to the motor, which is then fed through an FOC algorithm.

So in setup A (my previous setup using an older version of the SimpleFOC library), let’s say the pendulum is falling in a specific way, generating a given error/time signal from the sensor. This signal goes through a particular tuned PID controller, resulting in a particular control torque demand signal. The FOC loop then takes this torque demand profile, and attempts to deliver this torque profile through the motor. The actual torque profile executed by the motor determines the behaviour of the pendulum.

Now we consider setup B (identical to setup A in every way, except that an updated version of the library has been used). So if at a certain moment the pendulum in system B undergoes an identical perturbation to the one it experienced in setup A, my logic about how the process proceeds goes as follows:

  • by definition, the real (physical world) error “signal” is identical between the setups (i.e. the system is experiencing the same perturbation)

  • the sensor has not changed, so presumably it measures the error exactly the same between the two setups (i.e. it’s streaming exactly the same data over the SPI connection)

  • question: I don’t know much about how this sensor-to-MCU communication happens - is this a possible point at which a change in the library could lead to a change in system behaviour? i.e. could the sensor be reading the exact same physical motion differently between two library versions? i.e. if we plotted the sensor (pendulum) angle vs. time signals as read in each of the two setups, could they differ to any significant degree? My instinct is that this should not be the case, since it would imply that at least one reading was significantly wrong (there can only be one right reading for one specific physical motion profile, ignoring rounding errors, noise etc. as insignificant).

  • so assuming that we do get an essentially identical reading from the sensor between the two setups, next that signal is run through the PID controller. In my mind, this is essentially just a math equation, and of course that must be invariant, i.e. if we put an identical signal into the PID equations, we must get the same signal out of them, regardless of library version.

  • so now we are at a point where, between both systems A and B, we are feeding an identical torque demand signal into their respective FOC loops. In reality, we observe that we are getting very different torque output profiles (and thus different pendulum behaviour). On this logic, the conclusion would seem to be that there is a significant change in the torque demand tracking performance of the FOC algorithm between the library versions. Is this so? I must admit, while I never tried to measure it, I never previously got the impression that there was any significant torque demand tracking error in the previous versions of the library, which would make me surprised if such a large degree of improvement was possible in the upgraded library.

And this is where I run out of ideas as to what “better loop performance” might, on an implementation level, actually mean. I can’t seem to identify a likely spot in the loop where this performance improvement might apply.

Incidentally, I fully agree that a change in hardware (e.g. sensor, motor, pendulum geometry etc.) is likely to necessitate a re-tune of the controller, since it (likely) represents a change in the system characteristic. I’m less sure about expecting a change in the library to represent such a change, unless there is a meaningful change in what the library was actually doing (which I think is what I was originally asking to try and find out). In my best understanding, any dynamics of the part of the system governed by the background execution of the code would be so fast as to be functionally immaterial to the dynamics of the actual pendulum system, which I’d expect to be dominated by the physics of the hardware, together with the controller gains. This is the origin of my expectation/assumption that something that the code is actually doing must have altered, rather than simply the control loops doing the exact same calculations but faster.

Have I missed something in my understanding?

For reference, ideas that had passed through my mind include that something in the library switched the default units it interprets a given signal to be expressed in (unlikely), or, perhaps slightly more likely, that some change was made to how differentiation is calculated (i.e. for velocity / derivative estimates). Just some ideas, and probably wrong, but this is the kind of thing I was hoping someone might be able to shed some light on.

Strictly speaking the gains should take into account the sample times. Can you compare the loop time between 2.2.0 and the latest?

Commenting just on the question of loop performance, not on your setup or the library changes:

of course the loop speed has a significant impact on the overall system performance! This is true despite the fact that our algorithms are written in a way that takes time into account, i.e. they don’t assume exactly regular calling intervals.

There are probably more reasons and better explanations than mine, but the basic reason is that the loop iteration speed determines the “sample time” of the whole setup, which is closely related to the bandwidth.

Your motor is turning at some speed, and has a fixed pole count, which together determine the speed in electrical revolutions and hence the frequency of the commutation signal and the speed at which the controller updates this.

One way to imagine this is to picture the 3 sine waves that make up the commutation, and now think of the controller “creating” these sine waves by setting “points” - the y resolution is determined roughly by the number of different voltage levels you can set, which is determined by the MCU’s PWM hardware and the PWM frequency. The x resolution for these points is time, and is determined exactly by the loop iteration speed - there is one update to the PWM signals via loopFOC() per loop iteration.

On the sensing side, the same applies - the resolution and latency of the sensor are related to the precision of each sample, but the loop iteration time determines the sample rate.

As the loop iteration speed changes, both the sample time on the output and the sensor input change, and this affects the behaviour of the system. At the limits the changes can be very significant. It is easy to imagine that on a slow MCU with slow iteration speed, you can’t drive the motor faster than a certain limit simply because you can’t “place points” in the x axis quickly enough to generate sine waves of the needed frequency.

Did that make any sense?

That said, I think there may have been other changes to the code that are more causal than the loop speed, but I am not sure.