Since the zero offset is already calculated by the algorithm anyway, the only additional input required would be the maximum current the sensor is rated for; the analog voltage could then be scaled to current based on whether the sensor is bidirectional or unidirectional (for low-side sensing).
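Just to make the conversion concrete, here is a minimal sketch of the voltage-to-current step being described (function and parameter names are made up for illustration, not taken from any library): once calibration has found the zero-current offset, the same formula covers both cases, the only difference being where the offset sits (near VCC/2 for bidirectional, near 0V for unidirectional low-side sensing).

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper, names are illustrative only.
// Converts a measured sensor output voltage to current, given the
// zero-current offset found by calibration and the sensitivity in V/A.
// Bidirectional sensor: vZeroOffset ~ VCC/2, result can be negative.
// Unidirectional (low-side): vZeroOffset ~ 0V, result is non-negative.
float voltsToAmps(float vOut, float vZeroOffset, float sensVperA) {
    return (vOut - vZeroOffset) / sensVperA;
}
```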
I agree, and even though it isn’t a hard calculation, I always manage to confuse myself and get it wrong…
I don’t think that’s sufficient… the rated current just defines the upper bound of what the sensor can handle.
The output is ratiometric around half the supply voltage, and as you say we find the zero point in the code. But it would be an assumption to treat 30A as the exact measurement range of the sensor, and to derive the sensitivity from that we would additionally need to pass the sensor’s supply voltage…
To use the ACS712 as an example, with a 30A range and 5V VCC, we would get a sensitivity of 0.083V/A = 83mV/A. But the sensor’s datasheet lists the sensitivity of the 30A model at 66mV/A… apparently they allow a bit of headroom, since VCC could be 4.5–5.5V.
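For reference, the 83mV/A figure above falls out of the range-based assumption like this (a sketch, with an invented helper name; this assumes the full swing from VCC/2 maps exactly to the rated current, which is precisely the assumption the datasheet value contradicts):

```cpp
#include <cassert>
#include <cmath>

// Range-based sensitivity estimate for a bidirectional sensor:
// the zero point sits at VCC/2, and we (incorrectly, per the
// datasheet) assume the remaining half of the supply spans the
// rated current. Returns V/A.
float sensFromRange(float vcc, float ratedAmps) {
    return (vcc / 2.0f) / ratedAmps;
}
```

With `sensFromRange(5.0f, 30.0f)` this gives ~0.0833 V/A = 83mV/A, versus the 66mV/A the datasheet actually specifies.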
I would propose we create a new constructor which takes directly the sensitivity in mV/A as the parameter. This is a value that should be in the datasheet of most Hall effect current sensors, I would guess.
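A rough sketch of what such a constructor could look like, next to the range-based one for comparison (class, member, and parameter names are invented here for illustration, not taken from any particular library):

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch only -- names are made up, not a real API.
class CurrentSense {
public:
    // Range-based: derive the gain from rated current and supply voltage,
    // assuming a bidirectional sensor centered at VCC/2.
    CurrentSense(float vcc, float ratedAmps)
        : voltsPerAmp((vcc / 2.0f) / ratedAmps) {}

    // Proposed: take the datasheet sensitivity directly, in mV/A.
    explicit CurrentSense(float mVperA)
        : voltsPerAmp(mVperA / 1000.0f) {}

    // Convert a measured output voltage to amps, given the
    // zero-current offset found by calibration.
    float toAmps(float vOut, float vZero) const {
        return (vOut - vZero) / voltsPerAmp;
    }

private:
    float voltsPerAmp;  // sensitivity in V/A
};
```

Usage would then be e.g. `CurrentSense sense(66.0f);` for the ACS712-30A, taking the 66mV/A straight from the datasheet instead of guessing it from the range.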
Your proposal is better, as it lets us use the value straight from the datasheet as the parameter. With the 30A version, the rated ±30A bidirectional swing at 66mV/A corresponds to a supply voltage of almost exactly 4V (2 × 30A × 66mV/A = 3.96V), while at 5V VCC the same sensitivity covers about ±37.9A, so the advantage of passing the sensitivity rather than the range is obvious.