Question: SimpleFOC current sensing with respect to integrated current sensors is a bit of a black box for me, so I have a question about selecting a current sensor by response time.
Basically the decision is between two sensors, one with a response time of 300ns and another with a response time of 4us. Given the current-sensing timing requirements, what response time is needed at, say, 10kHz and at 50kHz PWM? The sensing must be synchronized, however I am not clear on the timing.
Edit: ACS725LLCTR-50AB-T vs MCA1101-50-3
Edit2: To put things in perspective, a typical op-amp such as the LT1999 bidirectional current-sense amplifier (designed specifically for this purpose) has a response time of 500ns (0.5us), vs. the MCA1101-50-3 at 300ns (0.3us) and the ACS725LLCTR-50AB-T at 4us. The question is: where is the cutoff beyond which a current sensor is no longer suitable?
I’m also searching for a current sensor for my project.
At the moment I’m looking at the MLX91221KDC-ABR-050-RE; it has a 2us response time, but I’m not quite sure whether that is good enough. I need sensing at up to 400V.
Input from some more knowledgeable people in this matter would be appreciated.
I’m also interested in understanding exactly how to make inline current sensing work best within the FOC framework.
My current understanding is that with inline current sensing (which is what SimpleFOC implements) you don’t need to synchronise the ADC sampling time with the PWM, see here, from page 6:
“The main advantage of this topology is that the current can be read without a strong linkage to the PWM status and without timing limitations in case of very small PWM applied to the phase (low-side current sensing can only be executed while the low-side transistor of that leg is on). It also allows the detection of phase short-circuits.”
However, I also think I misunderstand something, because surely in order to get FOC to work well you need to sample the phase currents at regular intervals, otherwise the Q/D PI loops won’t function properly…? And since the current varies considerably within a phase over a single PWM period, what is the best integration time for the ADC when sampling the phase current? Maybe the integration time doesn’t matter much if the sampling instant is precise?
On the surface the current sensing is really simple. But once you start taking into account the timings from the PWM signal through the high/low-side driver delay, then the MOSFET/IGBT delay, rise and fall times, or the op-amp slew/settling time if op-amps are used, everything has to line up perfectly; at that point I believe only the person who wrote the inline current sensing could answer. It becomes so hardware-specific that even though I have read the code, the implementation as a whole, combined with the fact that people use whatever hardware they can lay their hands on, is rather opaque, at least to me.
The typical approach is to synchronise the current sensing to the PWM. For low-side sensing this is in fact a requirement, because (as you have pointed out above) you can only sense while the low-side transistor is conducting.
So for low-side sensing there is a (somewhat complex) dependency between the PWM timings and the current sensing. Typically you solve this using the MCU’s timer hardware, which is almost always capable of emitting interrupts (or “MCU events”) in the middle of the low-side on-period.
The duration of this period is given by the PWM frequency and the PWM resolution. So say you have 20kHz and 1000-step resolution. This means your shortest on-time for the low-side FET is 1/1000th of a 20kHz period, i.e. 1/20MHz, so 50ns… You want to complete your conversion well within that time, before you enter the zone where the FET is shutting off.
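That minimum on-time falls straight out of the two numbers; a trivial sketch (the 20kHz / 1000-step figures are just the example values above, and the function name is mine):

```cpp
#include <cassert>

// Shortest possible on-time for one PWM step:
//   t_min = T_pwm / resolution = 1 / (f_pwm * resolution)
double minPulseNs(double pwmFreqHz, double resolution) {
    return 1e9 / (pwmFreqHz * resolution);
}
```

With 20kHz PWM and 1000 steps this gives the 50ns figure quoted above.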
That’s pretty short, but luckily you can compute the 3rd phase current from the other two, and the commutation is such that when one phase is on most of the time, the other two phases are on less of the time. So you can choose a cut-off, say 250ns, below which you don’t sense a phase, and instead calculate it from the other two.
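The reconstruction trick relies on Kirchhoff’s current law: the three phase currents sum to zero, so a phase whose low-side window was too short to sample can be computed from the other two. A minimal sketch (the struct, function names, and the 250ns cutoff are illustrative, not SimpleFOC’s actual code):

```cpp
#include <cassert>

// Phase currents in amps.
struct PhaseCurrents { float a, b, c; };

// If one phase's low-side sampling window is shorter than the cutoff, its
// sample is unreliable, so reconstruct it from the other two using
// i_a + i_b + i_c = 0. winA/winB/winC are each phase's usable window in
// seconds; cutoff might be e.g. 250e-9f (250ns).
PhaseCurrents reconstructMissing(float ia, float ib, float ic,
                                 float winA, float winB, float winC,
                                 float cutoff) {
    PhaseCurrents out{ia, ib, ic};
    if (winA < cutoff)      out.a = -(ib + ic);
    else if (winB < cutoff) out.b = -(ia + ic);
    else if (winC < cutoff) out.c = -(ia + ib);
    return out;
}
```

Commutation guarantees at most one phase is near its minimum duty at any instant, so at most one phase ever needs reconstructing.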
For inline current sensing, you don’t have this problem. You could sense the currents any time, but as you have also pointed out in the comments above, you want it to be regular and with minimal delays. One way could be to just use timer interrupts to start the sensing at regular periods. Another way could be to use the main loop to drive it.
Some MCUs support using the ADCs in “sweep” or “bank” mode - i.e. reading all, or several, of the inputs of an ADC unit at the same time. Using this kind of mode will ensure that the current readings are “aligned” in time.
Thank you. I am slow on the uptake and will ask questions that seem obvious to anyone but me.
How did you arrive at the exact 1000 resolution number? Why not 100 or 10000?
Complete the conversion, or complete the acquisition of the signal? I mean the point in time where the signal “hits the wire”, because that’s what matters in the timing diagram. After that you can take your time converting and storing the result in the register, because the signal itself is already stored in the silicon: even though the external signal has moved on, we can still keep processing the original sample, which is “frozen in time” in a sampling “time capsule” by the silicon.
To me it seems you need to acquire the signal within the 50ns, not acquire, process, and store the result, which relaxes the timing requirement, since storing the sample is nearly instantaneous and its timing is negligible. So take a generic 32-bit MCU (like an STM32) with a 10MHz ADC clock (which has nothing to do with the MCU core clock, which could be 100MHz or 200MHz for all we know). Assuming a sampling time of about one cycle (100ns) and a conversion time of, say, 11 cycles (1100ns ≈ 1.1us), the end-to-end time until you “see” the value in the register cannot be under roughly 1.2us; the acquisition time itself, however, is only 100ns. (The real STM32 ADC processing time is about 1.3us according to the silicon specs, with a sampling acquisition time of about 80ns.) So now you see why I am asking about the 1000-resolution number: 50ns is extremely tight for hitting the middle of the pulse, and the MCU could produce pulses with 10ns resolution, depending on the frequency and PWM resolution. Hitting the middle of a 10ns pulse with an 80ns sampling time will produce an incorrect sample charge in the silicon when the ADC integrates the value for subsequent processing.
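The acquisition-vs-conversion split can be written out as arithmetic (using the example figures from this thread: a 10MHz ADC clock, 1 sample cycle, 11 conversion cycles; these are illustrative, not any particular part’s datasheet values):

```cpp
#include <cassert>

// Only the acquisition (sample-and-hold) window must land inside the
// low-side pulse; the conversion can finish after the FET has switched.
double adcAcquisitionNs(double adcClockHz, double sampleCycles) {
    return sampleCycles * 1e9 / adcClockHz;
}

// End-to-end latency until the result appears in the data register.
double adcEndToEndNs(double adcClockHz, double sampleCycles, double convCycles) {
    return (sampleCycles + convCycles) * 1e9 / adcClockHz;
}
```

With the figures above: 100ns of acquisition inside the pulse, ~1.2us before the register holds the value.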
In other words, while theoretically the current sensing is straightforward on paper, the code and hardware implementation and actual motor and code setup decisions make the end combination so uncertain in my head when I construct my use case.
I am not sure I follow this, because if I sample at an arbitrary point in time I could quite consistently hit an “off” period. I need to trigger the sampling at the very beginning of the rising edge, high or low, make sure the sampling window lines up with the actual current rising edge, and ensure the sampling width is at least one MCU sampling cycle, which in our example is 80ns. So my timing must account for the delays in the driver and gate, or else I may land in the dead-time at the beginning if I trigger on the PWM rising edge. And even if I interrupt at the middle of the PWM period, at short pulses, when a coil is at its minimum, I will miss the measurement window.
Which also leads back to my first question: at what point does the response time of current sensors and op-amps become so long that they are no longer applicable in this use case? They introduce extra delay into the measurement, since you don’t measure the current directly but through an intermediary.
Now you see my confusion.
If you could clarify, or may be point some obvious gaps in my reasoning, which are not obvious to me, it would be great.
Edit: One very important point: this concerns the case where the motor inductance does not “smooth out” the current. Where the entire chain (PWM->driver->transistor->coils) is so well designed that the motor behaves perfectly, I fully agree that the inline current-sense timing could be done at any point of the PWM control signal: the resulting current profile is just a “gentle wave”, the offset error is minimal, and even a really cheap current sensor would suffice, given that it smooths out the transient spikes, which could also be done with a simple filter. But since SimpleFOC aims at a very generic approach, and we combine random MCUs with drivers, motors, and sensing hardware, I feel a much more rigorous approach to the inline current-sensing timing is required.
It was just intended to be an example, but actually 1000 is about the minimum resolution we aim for; this gives some “reserve” for the case that you’re using low voltage limits on a higher-voltage power supply. 100 would be too low in this case. 10000 would be better, of course, but most MCUs can’t count that fast, even with the pre-scaler set to 1.
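For centre-aligned PWM (the mode usually used for motor control, since it gives a stable pulse centre to trigger from) the achievable resolution follows directly from the timer clock. A sketch, where the 170MHz and 72MHz clocks are just typical STM32-class figures I am assuming for illustration:

```cpp
#include <cassert>
#include <cstdint>

// With centre-aligned PWM the counter runs up and then down each period,
// so the usable resolution (the timer's top value) is:
//   resolution = f_timer / (2 * f_pwm)
uint32_t pwmResolution(uint32_t timerClockHz, uint32_t pwmFreqHz) {
    return timerClockHz / (2u * pwmFreqHz);
}
```

So at 20kHz PWM a 170MHz timer reaches 4250 steps, while a 72MHz timer only manages 1800; pushing for 10000 steps would need a far faster clock than most MCUs provide.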
Actually to be quite precise I should have said “start the acquisition”. And it’s only the sample time that is important, any post-processing can happen also after the FET closes.
This really depends on the MCU… an STM32G491 has a minimum sample time of 41.67ns - but faster sample times generally come at the expense of accuracy…
Yeah, that’s right, but unfortunately you really have to take the full sample while the low-side FET is conducting. So the time you have is the low-side on-time, reduced by the dead-time, divided by 2 (assuming you start sampling based on an interrupt from the middle of the pulse), and that’s all the time you have…
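That window can be sketched as follows (assuming, as stated, that sampling is triggered from the middle of the low-side pulse; the duty-cycle convention and names are mine):

```cpp
#include <cassert>

// Time available to take the sample, triggering at the middle of the
// low-side on-period: half of (low-side on-time minus the dead-time).
// dutyHigh is the high-side duty cycle (0..1); the low side conducts
// for the remainder of the PWM period.
double sampleWindowNs(double pwmFreqHz, double dutyHigh, double deadTimeNs) {
    double periodNs = 1e9 / pwmFreqHz;
    double lowOnNs  = (1.0 - dutyHigh) * periodNs - deadTimeNs;
    return lowOnNs / 2.0;  // only half remains after a centre-triggered start
}
```

At 20kHz, 50% duty, and 200ns dead-time this leaves about 12.4us, but the window shrinks rapidly as the high-side duty approaches 100%, which is exactly when the phase has to be reconstructed instead.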
The strategy to make it work is to check the duty cycle associated with the measurement (or simply discard the value from the phase with the largest duty cycle), and calculate that phase from the other two values. There should be no point in the commutation where more than one phase has a duty cycle too low to allow measurement, so this should always be possible.
Well, in the case of inline sensing, the current can flow when either FET is on, but also via the FET’s antiparallel diodes.
TBH, I don’t have the experience to answer this well, I am sure there are others who know far more about this! In my mind, the motor does smooth the PWM, but you still need to sense fairly quickly, as the currents are varying continuously, and you’re trying to use them for FOC control.
Also, most designs I’ve seen use a R-C filter on the current sense amp’s output, which will also smooth things out.
Again I have to defer this question to someone more knowledgeable. I assume a constant delay can be compensated somewhat in software, but jitter would be more of a problem. I’m not sure how big a problem this really is in real life, though - I think people mostly just use their MCUs at the fastest setting and work with that output.
Preliminary test results indicate that, for inline sensing, the response-time lag becomes a problem when the delay exceeds 3 to 5% of the FOC sinusoidal (electrical) period, at which point it impacts the feedback loop. For example, at 5kHz PWM with a 350Hz electrical frequency (which for a 7-pole-pair motor is roughly 314rad/s mechanical, or ~3,000 rpm), the electrical period is about 3ms (3000us), and the Allegro ACS711 current sensor with 100kHz bandwidth has a ~10us delay, which is negligible compared to the FOC period. In other words, if we use such a current sensor for inline sensing at anything under roughly 30,000 rpm, the feedback-loop error is negligible.
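The 3 to 5% rule of thumb above can be expressed as a simple check (the threshold is this thread’s preliminary measurement, not an established figure, and the function name is mine):

```cpp
#include <cassert>

// Sensor delay as a fraction of the electrical (FOC) period; the
// preliminary finding in this thread is that lag beyond roughly
// 3 to 5% of the period starts to hurt the current feedback loop.
double delayFraction(double sensorDelayUs, double focElectricalHz) {
    double periodUs = 1e6 / focElectricalHz;
    return sensorDelayUs / periodUs;
}
```

For the ACS711 example (10us delay, 350Hz electrical) this evaluates to about 0.35%, an order of magnitude below the threshold, which is why the delay is called negligible above.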
The situation changes drastically in the case of high-side or low-side sensing, however I can’t imagine anyone using a hall-effect current sensor for that. In any event, as @runger pointed out, even this could be side-stepped by adding an R-C filter, in which case the high side (in my case) gets reasonably well smoothed out even with an active load (a power resistor instead of an inductance). In my case a 10kΩ/100nF filter did the job. YMMV.
I have not confirmed these results when the motor is under load, so these are all preliminary. Caveat emptor.